Test Report: KVM_Linux_crio 22112

236742b414df344dfb04283ee96fef673bd34cb2:2025-12-12:42745

Failed tests (3/431)

Order  Test                                           Duration (s)
46     TestAddons/parallel/Ingress                    159.37
345    TestPreload                                    144.9
394    TestPause/serial/SecondStartNoReconfiguration  53.43
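
To re-run one of these failures locally, a plausible invocation is sketched below. This is an assumption-laden sketch, not the project's documented procedure: it presumes minikube's test/integration package layout, that out/minikube-linux-amd64 has already been built, and that a kvm2/crio environment comparable to the agent above is available; the timeout value is arbitrary.

# Re-run a single failing test by name (subtest paths use go test's slash syntax).
# Assumes out/minikube-linux-amd64 already exists (e.g. built via make).
go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Ingress"
# TestPreload and TestPause/serial/SecondStartNoReconfiguration can be re-run
# the same way by substituting their names after -run.
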
TestAddons/parallel/Ingress (159.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-347541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-347541 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-347541 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [28bd2e4c-a606-45ae-bff8-93cc740702b2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [28bd2e4c-a606-45ae-bff8-93cc740702b2] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006005749s
I1212 19:33:05.320103  139995 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-347541 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.250094556s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-347541 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.202
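
For reference, the failing step can be reproduced by hand against the same profile. The sketch below only reuses commands already shown in this log (profile addons-347541, the testdata manifests, the nginx.example.com Host header); the --max-time bound and the 2m pod wait are added assumptions. The "Process exited with status 28" in the stderr above is the remote curl's own exit code for an operation timeout, i.e. the ingress never answered within the test's window.

# Wait for the ingress-nginx controller and the test's nginx backend,
# then request it through the VM the same way the test does.
kubectl --context addons-347541 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
kubectl --context addons-347541 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-347541 replace --force -f testdata/nginx-pod-svc.yaml
kubectl --context addons-347541 wait --for=condition=ready pod -l run=nginx --timeout=2m
# The step that timed out in this run (bounded here with --max-time as an assumption):
out/minikube-linux-amd64 -p addons-347541 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
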
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-347541 -n addons-347541
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 logs -n 25: (1.085119922s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-167722                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-167722 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ --download-only -p binary-mirror-604879 --alsologtostderr --binary-mirror http://127.0.0.1:35119 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-604879 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ -p binary-mirror-604879                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-604879 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ addons  │ disable dashboard -p addons-347541                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │                     │
	│ addons  │ enable dashboard -p addons-347541                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │                     │
	│ start   │ -p addons-347541 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:30 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ enable headlamp -p addons-347541 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ ip      │ addons-347541 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:32 UTC │ 12 Dec 25 19:32 UTC │
	│ addons  │ addons-347541 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ ssh     │ addons-347541 ssh cat /opt/local-path-provisioner/pvc-c45c01d7-a7ea-4447-bcca-5299d5d7b030_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ addons  │ addons-347541 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-347541                                                                                                                                                                                                                                                                                                                                                                                         │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ ssh     │ addons-347541 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │                     │
	│ addons  │ addons-347541 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ addons  │ addons-347541 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ addons  │ addons-347541 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:33 UTC │ 12 Dec 25 19:33 UTC │
	│ ip      │ addons-347541 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-347541        │ jenkins │ v1.37.0 │ 12 Dec 25 19:35 UTC │ 12 Dec 25 19:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:30:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:30:00.668485  140968 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:30:00.668763  140968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:30:00.668773  140968 out.go:374] Setting ErrFile to fd 2...
	I1212 19:30:00.668780  140968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:30:00.668997  140968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:30:00.669563  140968 out.go:368] Setting JSON to false
	I1212 19:30:00.670427  140968 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4341,"bootTime":1765563460,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:30:00.670487  140968 start.go:143] virtualization: kvm guest
	I1212 19:30:00.672071  140968 out.go:179] * [addons-347541] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:30:00.673116  140968 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:30:00.673149  140968 notify.go:221] Checking for updates...
	I1212 19:30:00.675285  140968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:30:00.676263  140968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:30:00.677190  140968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:30:00.678126  140968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:30:00.678976  140968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:30:00.680046  140968 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:30:00.711486  140968 out.go:179] * Using the kvm2 driver based on user configuration
	I1212 19:30:00.712384  140968 start.go:309] selected driver: kvm2
	I1212 19:30:00.712399  140968 start.go:927] validating driver "kvm2" against <nil>
	I1212 19:30:00.712415  140968 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:30:00.713102  140968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:30:00.713378  140968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:30:00.713409  140968 cni.go:84] Creating CNI manager for ""
	I1212 19:30:00.713462  140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:30:00.713474  140968 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:30:00.713537  140968 start.go:353] cluster config:
	{Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1212 19:30:00.713666  140968 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:30:00.715408  140968 out.go:179] * Starting "addons-347541" primary control-plane node in "addons-347541" cluster
	I1212 19:30:00.716457  140968 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:30:00.716486  140968 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:30:00.716508  140968 cache.go:65] Caching tarball of preloaded images
	I1212 19:30:00.716585  140968 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 19:30:00.716596  140968 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 19:30:00.716890  140968 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json ...
	I1212 19:30:00.716909  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json: {Name:mk7b29990bece5ef9fb6739e4abf70fe5f6174b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:00.717047  140968 start.go:360] acquireMachinesLock for addons-347541: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 19:30:00.717551  140968 start.go:364] duration metric: took 489.279µs to acquireMachinesLock for "addons-347541"
	I1212 19:30:00.717597  140968 start.go:93] Provisioning new machine with config: &{Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:30:00.717652  140968 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 19:30:00.718911  140968 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1212 19:30:00.719098  140968 start.go:159] libmachine.API.Create for "addons-347541" (driver="kvm2")
	I1212 19:30:00.719148  140968 client.go:173] LocalClient.Create starting
	I1212 19:30:00.719227  140968 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem
	I1212 19:30:00.905026  140968 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem
	I1212 19:30:01.068487  140968 main.go:143] libmachine: creating domain...
	I1212 19:30:01.068509  140968 main.go:143] libmachine: creating network...
	I1212 19:30:01.069862  140968 main.go:143] libmachine: found existing default network
	I1212 19:30:01.070154  140968 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 19:30:01.071312  140968 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001975d50}
	I1212 19:30:01.071422  140968 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-347541</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 19:30:01.077187  140968 main.go:143] libmachine: creating private network mk-addons-347541 192.168.39.0/24...
	I1212 19:30:01.142955  140968 main.go:143] libmachine: private network mk-addons-347541 192.168.39.0/24 created
	I1212 19:30:01.143260  140968 main.go:143] libmachine: <network>
	  <name>mk-addons-347541</name>
	  <uuid>d48b8b9e-0d8d-48e3-b817-290b59763518</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:90:63:48'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
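
(Aside: the private network defined above can be inspected directly with libvirt's standard tooling; the network name is taken from this log.)

virsh --connect qemu:///system net-dumpxml mk-addons-347541
virsh --connect qemu:///system net-dhcp-leases mk-addons-347541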
	
	I1212 19:30:01.143297  140968 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 ...
	I1212 19:30:01.143320  140968 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso
	I1212 19:30:01.143331  140968 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:30:01.143398  140968 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22112-135957/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso...
	I1212 19:30:01.446361  140968 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa...
	I1212 19:30:01.631234  140968 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk...
	I1212 19:30:01.631286  140968 main.go:143] libmachine: Writing magic tar header
	I1212 19:30:01.631338  140968 main.go:143] libmachine: Writing SSH key tar header
	I1212 19:30:01.631425  140968 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 ...
	I1212 19:30:01.631485  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541
	I1212 19:30:01.631532  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541 (perms=drwx------)
	I1212 19:30:01.631552  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines
	I1212 19:30:01.631564  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines (perms=drwxr-xr-x)
	I1212 19:30:01.631576  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:30:01.631587  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube (perms=drwxr-xr-x)
	I1212 19:30:01.631597  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957
	I1212 19:30:01.631612  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957 (perms=drwxrwxr-x)
	I1212 19:30:01.631625  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1212 19:30:01.631634  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 19:30:01.631644  140968 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1212 19:30:01.631652  140968 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 19:30:01.631663  140968 main.go:143] libmachine: checking permissions on dir: /home
	I1212 19:30:01.631670  140968 main.go:143] libmachine: skipping /home - not owner
	I1212 19:30:01.631674  140968 main.go:143] libmachine: defining domain...
	I1212 19:30:01.633057  140968 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-347541</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-347541'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1212 19:30:01.640422  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:e1:43:7d in network default
	I1212 19:30:01.641086  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:01.641119  140968 main.go:143] libmachine: starting domain...
	I1212 19:30:01.641138  140968 main.go:143] libmachine: ensuring networks are active...
	I1212 19:30:01.641958  140968 main.go:143] libmachine: Ensuring network default is active
	I1212 19:30:01.642432  140968 main.go:143] libmachine: Ensuring network mk-addons-347541 is active
	I1212 19:30:01.643370  140968 main.go:143] libmachine: getting domain XML...
	I1212 19:30:01.644527  140968 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-347541</name>
	  <uuid>b1fb684f-da1f-4675-9f4b-aa96973add54</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/addons-347541.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a9:57:3c'/>
	      <source network='mk-addons-347541'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e1:43:7d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 19:30:02.947936  140968 main.go:143] libmachine: waiting for domain to start...
	I1212 19:30:02.949380  140968 main.go:143] libmachine: domain is now running
	I1212 19:30:02.949407  140968 main.go:143] libmachine: waiting for IP...
	I1212 19:30:02.950285  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:02.950778  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:02.950801  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:02.951063  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:02.951166  140968 retry.go:31] will retry after 298.326026ms: waiting for domain to come up
	I1212 19:30:03.250759  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:03.251277  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:03.251294  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:03.251572  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:03.251615  140968 retry.go:31] will retry after 259.086026ms: waiting for domain to come up
	I1212 19:30:03.512156  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:03.512724  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:03.512746  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:03.513042  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:03.513081  140968 retry.go:31] will retry after 460.175214ms: waiting for domain to come up
	I1212 19:30:03.974664  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:03.975165  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:03.975184  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:03.975533  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:03.975568  140968 retry.go:31] will retry after 478.456546ms: waiting for domain to come up
	I1212 19:30:04.455201  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:04.455741  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:04.455759  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:04.456016  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:04.456060  140968 retry.go:31] will retry after 486.30307ms: waiting for domain to come up
	I1212 19:30:04.943756  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:04.944287  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:04.944304  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:04.944556  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:04.944590  140968 retry.go:31] will retry after 848.999206ms: waiting for domain to come up
	I1212 19:30:05.795770  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:05.796357  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:05.796376  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:05.796673  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:05.796708  140968 retry.go:31] will retry after 845.582774ms: waiting for domain to come up
	I1212 19:30:06.644411  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:06.644945  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:06.644966  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:06.645286  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:06.645326  140968 retry.go:31] will retry after 1.081306031s: waiting for domain to come up
	I1212 19:30:07.728673  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:07.729177  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:07.729193  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:07.729452  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:07.729496  140968 retry.go:31] will retry after 1.620619119s: waiting for domain to come up
	I1212 19:30:09.351356  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:09.351854  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:09.351872  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:09.352157  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:09.352201  140968 retry.go:31] will retry after 1.817980315s: waiting for domain to come up
	I1212 19:30:11.171361  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:11.171930  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:11.171943  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:11.172278  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:11.172313  140968 retry.go:31] will retry after 2.176390828s: waiting for domain to come up
	I1212 19:30:13.351471  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:13.351920  140968 main.go:143] libmachine: no network interface addresses found for domain addons-347541 (source=lease)
	I1212 19:30:13.351936  140968 main.go:143] libmachine: trying to list again with source=arp
	I1212 19:30:13.352208  140968 main.go:143] libmachine: unable to find current IP address of domain addons-347541 in network mk-addons-347541 (interfaces detected: [])
	I1212 19:30:13.352242  140968 retry.go:31] will retry after 3.340610976s: waiting for domain to come up
	I1212 19:30:16.694012  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.694521  140968 main.go:143] libmachine: domain addons-347541 has current primary IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.694537  140968 main.go:143] libmachine: found domain IP: 192.168.39.202
	I1212 19:30:16.694558  140968 main.go:143] libmachine: reserving static IP address...
	I1212 19:30:16.694894  140968 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-347541", mac: "52:54:00:a9:57:3c", ip: "192.168.39.202"} in network mk-addons-347541
	I1212 19:30:16.877709  140968 main.go:143] libmachine: reserved static IP address 192.168.39.202 for domain addons-347541
	I1212 19:30:16.877734  140968 main.go:143] libmachine: waiting for SSH...
	I1212 19:30:16.877743  140968 main.go:143] libmachine: Getting to WaitForSSH function...
	I1212 19:30:16.880346  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.880735  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:16.880764  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.881059  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:16.881404  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:16.881418  140968 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1212 19:30:16.994373  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:30:16.994816  140968 main.go:143] libmachine: domain creation complete
	I1212 19:30:16.996372  140968 machine.go:94] provisionDockerMachine start ...
	I1212 19:30:16.998598  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.998992  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:16.999021  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:16.999174  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:16.999375  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:16.999386  140968 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 19:30:17.112605  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 19:30:17.112640  140968 buildroot.go:166] provisioning hostname "addons-347541"
	I1212 19:30:17.115802  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.116191  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.116218  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.116461  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:17.116702  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:17.116717  140968 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-347541 && echo "addons-347541" | sudo tee /etc/hostname
	I1212 19:30:17.252517  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-347541
	
	I1212 19:30:17.255158  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.255631  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.255661  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.255831  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:17.256099  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:17.256142  140968 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-347541' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-347541/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-347541' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 19:30:17.381776  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 19:30:17.381807  140968 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22112-135957/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-135957/.minikube}
	I1212 19:30:17.381849  140968 buildroot.go:174] setting up certificates
	I1212 19:30:17.381861  140968 provision.go:84] configureAuth start
	I1212 19:30:17.384712  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.385180  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.385205  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.387466  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.387751  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.387768  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.387888  140968 provision.go:143] copyHostCerts
	I1212 19:30:17.387975  140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem (1123 bytes)
	I1212 19:30:17.388092  140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem (1675 bytes)
	I1212 19:30:17.388164  140968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem (1078 bytes)
	I1212 19:30:17.389003  140968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem org=jenkins.addons-347541 san=[127.0.0.1 192.168.39.202 addons-347541 localhost minikube]
	I1212 19:30:17.605876  140968 provision.go:177] copyRemoteCerts
	I1212 19:30:17.605950  140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 19:30:17.608569  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.608928  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.608959  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.609125  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:17.698479  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 19:30:17.728262  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 19:30:17.757028  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 19:30:17.787316  140968 provision.go:87] duration metric: took 405.415343ms to configureAuth
	I1212 19:30:17.787351  140968 buildroot.go:189] setting minikube options for container-runtime
	I1212 19:30:17.787557  140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:30:17.790502  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.790905  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:17.790931  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:17.791159  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:17.791407  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:17.791425  140968 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 19:30:18.324560  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 19:30:18.324596  140968 machine.go:97] duration metric: took 1.328200526s to provisionDockerMachine
	I1212 19:30:18.324609  140968 client.go:176] duration metric: took 17.605450349s to LocalClient.Create
	I1212 19:30:18.324630  140968 start.go:167] duration metric: took 17.605532959s to libmachine.API.Create "addons-347541"
	I1212 19:30:18.324665  140968 start.go:293] postStartSetup for "addons-347541" (driver="kvm2")
	I1212 19:30:18.324683  140968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 19:30:18.324775  140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 19:30:18.327501  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.327852  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.327871  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.327987  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:18.415516  140968 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 19:30:18.419934  140968 info.go:137] Remote host: Buildroot 2025.02
	I1212 19:30:18.419960  140968 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
	I1212 19:30:18.420044  140968 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
	I1212 19:30:18.420080  140968 start.go:296] duration metric: took 95.403261ms for postStartSetup
	I1212 19:30:18.423197  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.423594  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.423624  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.423854  140968 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/config.json ...
	I1212 19:30:18.424056  140968 start.go:128] duration metric: took 17.706391968s to createHost
	I1212 19:30:18.426148  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.426521  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.426550  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.426714  140968 main.go:143] libmachine: Using SSH client type: native
	I1212 19:30:18.426937  140968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1212 19:30:18.426950  140968 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 19:30:18.541163  140968 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765567818.501135341
	
	I1212 19:30:18.541187  140968 fix.go:216] guest clock: 1765567818.501135341
	I1212 19:30:18.541196  140968 fix.go:229] Guest: 2025-12-12 19:30:18.501135341 +0000 UTC Remote: 2025-12-12 19:30:18.424088593 +0000 UTC m=+17.805219101 (delta=77.046748ms)
	I1212 19:30:18.541222  140968 fix.go:200] guest clock delta is within tolerance: 77.046748ms
	I1212 19:30:18.541229  140968 start.go:83] releasing machines lock for "addons-347541", held for 17.823662986s
	I1212 19:30:18.543967  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.544357  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.544381  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.544911  140968 ssh_runner.go:195] Run: cat /version.json
	I1212 19:30:18.544985  140968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 19:30:18.548144  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.548260  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.548589  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.548656  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:18.548684  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.548721  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:18.548882  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:18.549023  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:18.657618  140968 ssh_runner.go:195] Run: systemctl --version
	I1212 19:30:18.663841  140968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 19:30:18.817799  140968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 19:30:18.825475  140968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 19:30:18.825553  140968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 19:30:18.845765  140968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 19:30:18.845791  140968 start.go:496] detecting cgroup driver to use...
	I1212 19:30:18.845876  140968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 19:30:18.867508  140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 19:30:18.884489  140968 docker.go:218] disabling cri-docker service (if available) ...
	I1212 19:30:18.884573  140968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 19:30:18.902820  140968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 19:30:18.919515  140968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 19:30:19.072250  140968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 19:30:19.289398  140968 docker.go:234] disabling docker service ...
	I1212 19:30:19.289463  140968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 19:30:19.305224  140968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 19:30:19.319946  140968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 19:30:19.483025  140968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 19:30:19.628651  140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 19:30:19.644709  140968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 19:30:19.666219  140968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 19:30:19.666282  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.678574  140968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 19:30:19.678637  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.689982  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.701617  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.713495  140968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 19:30:19.725720  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.737186  140968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 19:30:19.757536  140968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
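The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs, reset conmon_cgroup to "pod", and open low ports via default_sysctls. A rough in-process equivalent using Go's regexp package (illustrative sketch only; the starting config values are assumptions, not the VM's real file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// pin the pause image and cgroup manager, mirroring the sed edits above
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// drop any existing conmon_cgroup, then re-add it right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	// allow unprivileged processes in pods to bind low ports
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	fmt.Print(conf)
}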
	I1212 19:30:19.769399  140968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 19:30:19.779271  140968 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 19:30:19.779330  140968 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 19:30:19.800843  140968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 19:30:19.813428  140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:30:19.959567  140968 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 19:30:20.066130  140968 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 19:30:20.066263  140968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 19:30:20.071997  140968 start.go:564] Will wait 60s for crictl version
	I1212 19:30:20.072081  140968 ssh_runner.go:195] Run: which crictl
	I1212 19:30:20.076021  140968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 19:30:20.110823  140968 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 19:30:20.110955  140968 ssh_runner.go:195] Run: crio --version
	I1212 19:30:20.138662  140968 ssh_runner.go:195] Run: crio --version
	I1212 19:30:20.168121  140968 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 19:30:20.172030  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:20.172398  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:20.172421  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:20.172607  140968 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 19:30:20.177235  140968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:30:20.192254  140968 kubeadm.go:884] updating cluster {Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 19:30:20.192405  140968 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:30:20.192463  140968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:30:20.222073  140968 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1212 19:30:20.222180  140968 ssh_runner.go:195] Run: which lz4
	I1212 19:30:20.226629  140968 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 19:30:20.231403  140968 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 19:30:20.231444  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1212 19:30:21.372447  140968 crio.go:462] duration metric: took 1.145870197s to copy over tarball
	I1212 19:30:21.372575  140968 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 19:30:23.093577  140968 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.720959335s)
	I1212 19:30:23.093613  140968 crio.go:469] duration metric: took 1.721123252s to extract the tarball
	I1212 19:30:23.093622  140968 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 19:30:23.130289  140968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 19:30:23.169285  140968 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 19:30:23.169313  140968 cache_images.go:86] Images are preloaded, skipping loading
	I1212 19:30:23.169324  140968 kubeadm.go:935] updating node { 192.168.39.202 8443 v1.34.2 crio true true} ...
	I1212 19:30:23.169456  140968 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-347541 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 19:30:23.169531  140968 ssh_runner.go:195] Run: crio config
	I1212 19:30:23.216098  140968 cni.go:84] Creating CNI manager for ""
	I1212 19:30:23.216134  140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:30:23.216157  140968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 19:30:23.216196  140968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-347541 NodeName:addons-347541 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 19:30:23.216324  140968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-347541"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.202"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 19:30:23.216387  140968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 19:30:23.228847  140968 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 19:30:23.228944  140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 19:30:23.241485  140968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 19:30:23.262387  140968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 19:30:23.283470  140968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
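The kubeadm.yaml.new copied above is the multi-document YAML stream rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a stream and prints each document's apiVersion and kind, assuming gopkg.in/yaml.v3 and a local copy of the file (hypothetical checker, not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// path is an assumption for illustration; the VM copy lives under /var/tmp/minikube
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", m.APIVersion, m.Kind)
	}
}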
	I1212 19:30:23.304411  140968 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1212 19:30:23.308528  140968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 19:30:23.323463  140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 19:30:23.464842  140968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:30:23.484934  140968 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541 for IP: 192.168.39.202
	I1212 19:30:23.484985  140968 certs.go:195] generating shared ca certs ...
	I1212 19:30:23.485011  140968 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.485194  140968 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 19:30:23.563998  140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt ...
	I1212 19:30:23.564032  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt: {Name:mk18cfabcdb3a68d046e7a8c89c35160dc36f4ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.564819  140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key ...
	I1212 19:30:23.564838  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key: {Name:mk47a607b7e1d4fe7cd7ac22805d30141927b16d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.565292  140968 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 19:30:23.617265  140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt ...
	I1212 19:30:23.617302  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt: {Name:mk85dbc3c74242157ff9f330c6deabfc77aec2e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.618098  140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key ...
	I1212 19:30:23.618142  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key: {Name:mkde9c4df31a46fde4189054105ffdc3f6362e6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.618304  140968 certs.go:257] generating profile certs ...
	I1212 19:30:23.618370  140968 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key
	I1212 19:30:23.618397  140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt with IP's: []
	I1212 19:30:23.771361  140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt ...
	I1212 19:30:23.771400  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: {Name:mkabf5ff19b68483714d8347866512a978f4ba2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.771586  140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key ...
	I1212 19:30:23.771598  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.key: {Name:mk6047470d3c978be16f7b1d2eed436c1b281da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.772194  140968 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe
	I1212 19:30:23.772223  140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.202]
	I1212 19:30:23.826537  140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe ...
	I1212 19:30:23.826570  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe: {Name:mk263cdb39097ad588559d4bf43d83e7f753e8a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.826741  140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe ...
	I1212 19:30:23.826758  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe: {Name:mk0a31db8094fd8f08b871bfe87ac103b9347e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.827459  140968 certs.go:382] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt.ed0663fe -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt
	I1212 19:30:23.827544  140968 certs.go:386] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key.ed0663fe -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key
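The apiserver profile cert generated above carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.202. A condensed crypto/x509 sketch of issuing a serving certificate with those SANs (self-signed here for brevity and purely illustrative; minikube signs with the profile CA instead):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the IP SANs reported in the log above
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.202"),
		},
	}
	// self-signed: template doubles as parent
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}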
	I1212 19:30:23.827592  140968 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key
	I1212 19:30:23.827612  140968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt with IP's: []
	I1212 19:30:23.984075  140968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt ...
	I1212 19:30:23.984118  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt: {Name:mkd556fe950bdc660e1b7357de69d4068f78044e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.984337  140968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key ...
	I1212 19:30:23.984355  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key: {Name:mka17887116c1f5f6d129bb865f71a16e35db1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:23.984572  140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 19:30:23.984617  140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 19:30:23.984645  140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 19:30:23.984671  140968 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 19:30:23.985337  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 19:30:24.015999  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 19:30:24.045956  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 19:30:24.075506  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 19:30:24.105700  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 19:30:24.136977  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 19:30:24.184635  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 19:30:24.222985  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 19:30:24.253538  140968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 19:30:24.283311  140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 19:30:24.304150  140968 ssh_runner.go:195] Run: openssl version
	I1212 19:30:24.310534  140968 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:30:24.322534  140968 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 19:30:24.334552  140968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:30:24.339769  140968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:30:24.339845  140968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 19:30:24.347228  140968 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 19:30:24.359091  140968 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 19:30:24.371148  140968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 19:30:24.375947  140968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 19:30:24.376021  140968 kubeadm.go:401] StartCluster: {Name:addons-347541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-347541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:30:24.376094  140968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 19:30:24.376188  140968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 19:30:24.407349  140968 cri.go:89] found id: ""
	I1212 19:30:24.407431  140968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 19:30:24.419992  140968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 19:30:24.432630  140968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 19:30:24.444916  140968 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 19:30:24.444936  140968 kubeadm.go:158] found existing configuration files:
	
	I1212 19:30:24.445000  140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 19:30:24.456502  140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 19:30:24.456572  140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 19:30:24.468621  140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 19:30:24.479764  140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 19:30:24.479829  140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 19:30:24.491684  140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 19:30:24.502606  140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 19:30:24.502672  140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 19:30:24.514307  140968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 19:30:24.524777  140968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 19:30:24.524845  140968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 19:30:24.536512  140968 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 19:30:24.680901  140968 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 19:30:37.730772  140968 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 19:30:37.730831  140968 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 19:30:37.730913  140968 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 19:30:37.731103  140968 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 19:30:37.731280  140968 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 19:30:37.731362  140968 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 19:30:37.732736  140968 out.go:252]   - Generating certificates and keys ...
	I1212 19:30:37.732837  140968 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 19:30:37.732924  140968 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 19:30:37.733033  140968 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 19:30:37.733138  140968 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 19:30:37.733230  140968 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 19:30:37.733346  140968 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 19:30:37.733446  140968 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 19:30:37.733612  140968 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-347541 localhost] and IPs [192.168.39.202 127.0.0.1 ::1]
	I1212 19:30:37.733684  140968 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 19:30:37.733858  140968 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-347541 localhost] and IPs [192.168.39.202 127.0.0.1 ::1]
	I1212 19:30:37.733950  140968 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 19:30:37.734038  140968 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 19:30:37.734103  140968 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 19:30:37.734210  140968 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 19:30:37.734291  140968 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 19:30:37.734368  140968 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 19:30:37.734448  140968 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 19:30:37.734534  140968 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 19:30:37.734625  140968 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 19:30:37.734729  140968 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 19:30:37.734823  140968 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 19:30:37.736341  140968 out.go:252]   - Booting up control plane ...
	I1212 19:30:37.736471  140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 19:30:37.736603  140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 19:30:37.736661  140968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 19:30:37.736761  140968 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 19:30:37.736838  140968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 19:30:37.736921  140968 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 19:30:37.736988  140968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 19:30:37.737020  140968 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 19:30:37.737180  140968 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 19:30:37.737267  140968 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 19:30:37.737319  140968 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501137122s
	I1212 19:30:37.737394  140968 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 19:30:37.737463  140968 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.202:8443/livez
	I1212 19:30:37.737541  140968 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 19:30:37.737608  140968 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 19:30:37.737703  140968 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.669943413s
	I1212 19:30:37.737823  140968 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.710932375s
	I1212 19:30:37.737921  140968 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502318695s
	I1212 19:30:37.738042  140968 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 19:30:37.738222  140968 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 19:30:37.738306  140968 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 19:30:37.738499  140968 kubeadm.go:319] [mark-control-plane] Marking the node addons-347541 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 19:30:37.738561  140968 kubeadm.go:319] [bootstrap-token] Using token: 5xyxrx.8cc9hzhgxpkclftb
	I1212 19:30:37.740549  140968 out.go:252]   - Configuring RBAC rules ...
	I1212 19:30:37.740668  140968 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 19:30:37.740760  140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 19:30:37.740931  140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 19:30:37.741079  140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 19:30:37.741247  140968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 19:30:37.741368  140968 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 19:30:37.741508  140968 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 19:30:37.741570  140968 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 19:30:37.741637  140968 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 19:30:37.741649  140968 kubeadm.go:319] 
	I1212 19:30:37.741699  140968 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 19:30:37.741709  140968 kubeadm.go:319] 
	I1212 19:30:37.741767  140968 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 19:30:37.741773  140968 kubeadm.go:319] 
	I1212 19:30:37.741793  140968 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 19:30:37.741840  140968 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 19:30:37.741894  140968 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 19:30:37.741902  140968 kubeadm.go:319] 
	I1212 19:30:37.741984  140968 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 19:30:37.741992  140968 kubeadm.go:319] 
	I1212 19:30:37.742056  140968 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 19:30:37.742065  140968 kubeadm.go:319] 
	I1212 19:30:37.742149  140968 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 19:30:37.742254  140968 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 19:30:37.742348  140968 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 19:30:37.742361  140968 kubeadm.go:319] 
	I1212 19:30:37.742461  140968 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 19:30:37.742563  140968 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 19:30:37.742572  140968 kubeadm.go:319] 
	I1212 19:30:37.742673  140968 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5xyxrx.8cc9hzhgxpkclftb \
	I1212 19:30:37.742802  140968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2a055c2f74563dd017e9ed55ed932d3460a1f443e96894092fdaf892a84e9a9a \
	I1212 19:30:37.742833  140968 kubeadm.go:319] 	--control-plane 
	I1212 19:30:37.742843  140968 kubeadm.go:319] 
	I1212 19:30:37.742941  140968 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 19:30:37.742948  140968 kubeadm.go:319] 
	I1212 19:30:37.743046  140968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5xyxrx.8cc9hzhgxpkclftb \
	I1212 19:30:37.743219  140968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2a055c2f74563dd017e9ed55ed932d3460a1f443e96894092fdaf892a84e9a9a 
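In the wait-control-plane phase above, kubeadm probes https://192.168.39.202:8443/livez (plus the local healthz/livez ports of the controller-manager and scheduler) until each reports healthy. A minimal sketch of the same kind of probe against the apiserver endpoint, with TLS verification skipped because the serving cert is cluster-internal (assumed standalone example, not kubeadm's code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// endpoint taken from the control-plane-check lines above
	resp, err := client.Get("https://192.168.39.202:8443/livez")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("livez status:", resp.Status)
}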
	I1212 19:30:37.743242  140968 cni.go:84] Creating CNI manager for ""
	I1212 19:30:37.743250  140968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:30:37.744679  140968 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 19:30:37.745792  140968 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 19:30:37.759539  140968 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 19:30:37.786435  140968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 19:30:37.786520  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:37.786548  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-347541 minikube.k8s.io/updated_at=2025_12_12T19_30_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-347541 minikube.k8s.io/primary=true
	I1212 19:30:37.829814  140968 ops.go:34] apiserver oom_adj: -16
	I1212 19:30:37.915474  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:38.416395  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:38.916407  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:39.416473  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:39.916258  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:40.416215  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:40.916138  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:41.416318  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:41.916089  140968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 19:30:42.037154  140968 kubeadm.go:1114] duration metric: took 4.250701029s to wait for elevateKubeSystemPrivileges
	I1212 19:30:42.037242  140968 kubeadm.go:403] duration metric: took 17.661224703s to StartCluster
	I1212 19:30:42.037273  140968 settings.go:142] acquiring lock: {Name:mk2e3b99c7ed93165698abc6c533d079febb6d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:42.037478  140968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:30:42.038072  140968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/kubeconfig: {Name:mkab6c8db323de95c4a5daef1e17fdaffcd571ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 19:30:42.038369  140968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 19:30:42.038422  140968 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 19:30:42.038461  140968 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 19:30:42.038596  140968 addons.go:70] Setting yakd=true in profile "addons-347541"
	I1212 19:30:42.038616  140968 addons.go:70] Setting inspektor-gadget=true in profile "addons-347541"
	I1212 19:30:42.038630  140968 addons.go:70] Setting storage-provisioner=true in profile "addons-347541"
	I1212 19:30:42.038640  140968 addons.go:239] Setting addon storage-provisioner=true in "addons-347541"
	I1212 19:30:42.038643  140968 addons.go:239] Setting addon inspektor-gadget=true in "addons-347541"
	I1212 19:30:42.038662  140968 addons.go:70] Setting registry-creds=true in profile "addons-347541"
	I1212 19:30:42.038674  140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:30:42.038689  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.038696  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.038707  140968 addons.go:70] Setting metrics-server=true in profile "addons-347541"
	I1212 19:30:42.038720  140968 addons.go:239] Setting addon metrics-server=true in "addons-347541"
	I1212 19:30:42.038732  140968 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-347541"
	I1212 19:30:42.038748  140968 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-347541"
	I1212 19:30:42.038765  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.038754  140968 addons.go:70] Setting default-storageclass=true in profile "addons-347541"
	I1212 19:30:42.038771  140968 addons.go:70] Setting volcano=true in profile "addons-347541"
	I1212 19:30:42.038794  140968 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-347541"
	I1212 19:30:42.038797  140968 addons.go:239] Setting addon volcano=true in "addons-347541"
	I1212 19:30:42.038826  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.039183  140968 addons.go:70] Setting cloud-spanner=true in profile "addons-347541"
	I1212 19:30:42.039207  140968 addons.go:239] Setting addon cloud-spanner=true in "addons-347541"
	I1212 19:30:42.039232  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.039367  140968 addons.go:70] Setting ingress=true in profile "addons-347541"
	I1212 19:30:42.039383  140968 addons.go:239] Setting addon ingress=true in "addons-347541"
	I1212 19:30:42.039425  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.039982  140968 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-347541"
	I1212 19:30:42.040034  140968 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-347541"
	I1212 19:30:42.040064  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.040138  140968 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-347541"
	I1212 19:30:42.040155  140968 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-347541"
	I1212 19:30:42.040176  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.040292  140968 addons.go:70] Setting ingress-dns=true in profile "addons-347541"
	I1212 19:30:42.040310  140968 addons.go:239] Setting addon ingress-dns=true in "addons-347541"
	I1212 19:30:42.040349  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.038622  140968 addons.go:239] Setting addon yakd=true in "addons-347541"
	I1212 19:30:42.040396  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.038698  140968 addons.go:239] Setting addon registry-creds=true in "addons-347541"
	I1212 19:30:42.040715  140968 addons.go:70] Setting gcp-auth=true in profile "addons-347541"
	I1212 19:30:42.040720  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.040764  140968 mustload.go:66] Loading cluster: addons-347541
	I1212 19:30:42.040994  140968 config.go:182] Loaded profile config "addons-347541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:30:42.041017  140968 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-347541"
	I1212 19:30:42.041068  140968 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-347541"
	I1212 19:30:42.041103  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.041156  140968 addons.go:70] Setting registry=true in profile "addons-347541"
	I1212 19:30:42.041170  140968 addons.go:239] Setting addon registry=true in "addons-347541"
	I1212 19:30:42.041188  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.041365  140968 out.go:179] * Verifying Kubernetes components...
	I1212 19:30:42.041420  140968 addons.go:70] Setting volumesnapshots=true in profile "addons-347541"
	I1212 19:30:42.041443  140968 addons.go:239] Setting addon volumesnapshots=true in "addons-347541"
	I1212 19:30:42.041474  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.042606  140968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1212 19:30:42.045206  140968 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 19:30:42.047241  140968 addons.go:239] Setting addon default-storageclass=true in "addons-347541"
	I1212 19:30:42.047275  140968 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-347541"
	I1212 19:30:42.047282  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.047314  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.047514  140968 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 19:30:42.048639  140968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:30:42.048663  140968 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 19:30:42.048665  140968 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 19:30:42.048766  140968 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 19:30:42.048689  140968 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 19:30:42.048642  140968 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 19:30:42.050304  140968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:30:42.050326  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 19:30:42.050424  140968 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 19:30:42.050442  140968 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 19:30:42.050417  140968 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 19:30:42.050561  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 19:30:42.050308  140968 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:30:42.051006  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 19:30:42.050453  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 19:30:42.050465  140968 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 19:30:42.050975  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:42.051938  140968 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:30:42.051958  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 19:30:42.053507  140968 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 19:30:42.053529  140968 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 19:30:42.053529  140968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 19:30:42.053551  140968 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:30:42.053997  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 19:30:42.053560  140968 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 19:30:42.053670  140968 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 19:30:42.054325  140968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 19:30:42.054243  140968 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 19:30:42.054243  140968 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 19:30:42.054246  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 19:30:42.054996  140968 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 19:30:42.055087  140968 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:30:42.055374  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 19:30:42.055686  140968 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:30:42.055704  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 19:30:42.056299  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 19:30:42.056380  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 19:30:42.056400  140968 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 19:30:42.056387  140968 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 19:30:42.056471  140968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:30:42.058159  140968 out.go:179]   - Using image docker.io/busybox:stable
	I1212 19:30:42.058180  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 19:30:42.058259  140968 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 19:30:42.058448  140968 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:30:42.058473  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 19:30:42.058447  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 19:30:42.059434  140968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:30:42.059463  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 19:30:42.059965  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.060711  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.061003  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 19:30:42.061354  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.061988  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.062024  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.062297  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.062332  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.062461  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.063150  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.063323  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.063357  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 19:30:42.063444  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.063478  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.063578  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.064144  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.065023  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.065065  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.065594  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 19:30:42.065594  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.065670  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.065708  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.065850  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.066536  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.066737  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.067401  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.067415  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.067499  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.068014  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.068060  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.068097  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.068459  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.068491  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.068505  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.068610  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 19:30:42.068864  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.068896  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.069045  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.069049  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.069795  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.069827  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.069834  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.069863  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.069960  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.070060  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.070150  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.070187  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.070313  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.070579  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.070931  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.070960  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.070968  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.071126  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.071217  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.071474  140968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 19:30:42.071809  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.071832  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.071850  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.071865  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.072037  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.072268  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:42.072509  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 19:30:42.072527  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 19:30:42.074808  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.075214  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:42.075255  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:42.075465  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	W1212 19:30:42.498183  140968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:45836->192.168.39.202:22: read: connection reset by peer
	I1212 19:30:42.498224  140968 retry.go:31] will retry after 307.207548ms: ssh: handshake failed: read tcp 192.168.39.1:45836->192.168.39.202:22: read: connection reset by peer
	I1212 19:30:42.788735  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 19:30:42.882868  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 19:30:43.007536  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 19:30:43.030545  140968 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 19:30:43.030570  140968 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 19:30:43.037875  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 19:30:43.043916  140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 19:30:43.043938  140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 19:30:43.045010  140968 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 19:30:43.045028  140968 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 19:30:43.076915  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 19:30:43.091459  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 19:30:43.135445  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 19:30:43.161314  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 19:30:43.228759  140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 19:30:43.228791  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 19:30:43.256924  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 19:30:43.316411  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 19:30:43.372722  140968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.334319389s)
	I1212 19:30:43.372845  140968 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.330204686s)
	I1212 19:30:43.372944  140968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 19:30:43.372943  140968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 19:30:43.537404  140968 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 19:30:43.537440  140968 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 19:30:43.642927  140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 19:30:43.642954  140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 19:30:43.806680  140968 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:30:43.806715  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 19:30:43.933963  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 19:30:43.933999  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 19:30:43.982194  140968 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 19:30:43.982232  140968 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 19:30:44.006492  140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 19:30:44.006520  140968 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 19:30:44.047583  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 19:30:44.047621  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 19:30:44.059717  140968 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:30:44.059745  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 19:30:44.071183  140968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 19:30:44.071214  140968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 19:30:44.171789  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 19:30:44.366074  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 19:30:44.366121  140968 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 19:30:44.376527  140968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:30:44.376551  140968 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 19:30:44.398265  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 19:30:44.448003  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 19:30:44.448030  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 19:30:44.544019  140968 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:30:44.544047  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 19:30:44.571554  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 19:30:44.712676  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:30:44.898418  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 19:30:44.898462  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 19:30:45.105077  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.316291183s)
	I1212 19:30:45.278698  140968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 19:30:45.278734  140968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 19:30:45.888452  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 19:30:45.888492  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 19:30:46.324416  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 19:30:46.324451  140968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 19:30:46.659047  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 19:30:46.659073  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 19:30:46.894583  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 19:30:46.894608  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 19:30:47.380602  140968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:30:47.380630  140968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 19:30:47.642669  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 19:30:48.456497  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.573587528s)
	I1212 19:30:48.456594  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.449019797s)
	I1212 19:30:48.456673  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.41876682s)
	I1212 19:30:48.495601  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.418638209s)
	I1212 19:30:48.495665  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.40417141s)
	I1212 19:30:49.485218  140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 19:30:49.488385  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:49.488853  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:49.488887  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:49.489054  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:49.891240  140968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 19:30:50.021210  140968 addons.go:239] Setting addon gcp-auth=true in "addons-347541"
	I1212 19:30:50.021279  140968 host.go:66] Checking if "addons-347541" exists ...
	I1212 19:30:50.023291  140968 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 19:30:50.026057  140968 main.go:143] libmachine: domain addons-347541 has defined MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:50.026518  140968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a9:57:3c", ip: ""} in network mk-addons-347541: {Iface:virbr1 ExpiryTime:2025-12-12 20:30:16 +0000 UTC Type:0 Mac:52:54:00:a9:57:3c Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:addons-347541 Clientid:01:52:54:00:a9:57:3c}
	I1212 19:30:50.026550  140968 main.go:143] libmachine: domain addons-347541 has defined IP address 192.168.39.202 and MAC address 52:54:00:a9:57:3c in network mk-addons-347541
	I1212 19:30:50.026719  140968 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/addons-347541/id_rsa Username:docker}
	I1212 19:30:51.390040  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.254520086s)
	I1212 19:30:51.390090  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.228733885s)
	I1212 19:30:51.390101  140968 addons.go:495] Verifying addon ingress=true in "addons-347541"
	I1212 19:30:51.390143  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.133179371s)
	I1212 19:30:51.390194  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (8.073741466s)
	I1212 19:30:51.390251  140968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.017214716s)
	I1212 19:30:51.390278  140968 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1212 19:30:51.390233  140968 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.017269992s)
	I1212 19:30:51.390355  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.218535791s)
	I1212 19:30:51.390376  140968 addons.go:495] Verifying addon registry=true in "addons-347541"
	I1212 19:30:51.390444  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.992113142s)
	I1212 19:30:51.390499  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.818908881s)
	I1212 19:30:51.390524  140968 addons.go:495] Verifying addon metrics-server=true in "addons-347541"
	I1212 19:30:51.390626  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.677917582s)
	W1212 19:30:51.391126  140968 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:30:51.391154  140968 retry.go:31] will retry after 264.780265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 19:30:51.391425  140968 out.go:179] * Verifying registry addon...
	I1212 19:30:51.391426  140968 out.go:179] * Verifying ingress addon...
	I1212 19:30:51.391488  140968 node_ready.go:35] waiting up to 6m0s for node "addons-347541" to be "Ready" ...
	I1212 19:30:51.392152  140968 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-347541 service yakd-dashboard -n yakd-dashboard
	
	I1212 19:30:51.394251  140968 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 19:30:51.394319  140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 19:30:51.444483  140968 node_ready.go:49] node "addons-347541" is "Ready"
	I1212 19:30:51.444516  140968 node_ready.go:38] duration metric: took 52.68821ms for node "addons-347541" to be "Ready" ...
	I1212 19:30:51.444533  140968 api_server.go:52] waiting for apiserver process to appear ...
	I1212 19:30:51.444594  140968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:30:51.463301  140968 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 19:30:51.463337  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:51.463301  140968 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 19:30:51.463361  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1212 19:30:51.478349  140968 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1212 19:30:51.656538  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 19:30:51.905878  140968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-347541" context rescaled to 1 replicas
	I1212 19:30:51.907526  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:51.908023  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.427666  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.427756  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.610896  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.968152855s)
	I1212 19:30:52.610931  140968 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.587609311s)
	I1212 19:30:52.610958  140968 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-347541"
	I1212 19:30:52.610993  140968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.166377678s)
	I1212 19:30:52.611020  140968 api_server.go:72] duration metric: took 10.572555968s to wait for apiserver process to appear ...
	I1212 19:30:52.611158  140968 api_server.go:88] waiting for apiserver healthz status ...
	I1212 19:30:52.611212  140968 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1212 19:30:52.612369  140968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 19:30:52.613118  140968 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 19:30:52.614400  140968 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 19:30:52.615143  140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 19:30:52.615713  140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 19:30:52.615728  140968 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 19:30:52.646195  140968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 19:30:52.646216  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:52.646765  140968 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1212 19:30:52.650199  140968 api_server.go:141] control plane version: v1.34.2
	I1212 19:30:52.650229  140968 api_server.go:131] duration metric: took 39.061885ms to wait for apiserver health ...
	I1212 19:30:52.650259  140968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 19:30:52.684054  140968 system_pods.go:59] 20 kube-system pods found
	I1212 19:30:52.684092  140968 system_pods.go:61] "amd-gpu-device-plugin-2xl4r" [ede87043-19cb-485d-8eb9-d84d809cdc54] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:52.684100  140968 system_pods.go:61] "coredns-66bc5c9577-vvxxj" [5d9292f5-1548-47ef-a76a-f488221712e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:52.684125  140968 system_pods.go:61] "coredns-66bc5c9577-zf7x7" [193b24c3-32e5-4ca1-bebb-0a249a6a436e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:52.684134  140968 system_pods.go:61] "csi-hostpath-attacher-0" [7500a8ca-2ffc-4d75-ae8c-e49175987633] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:52.684138  140968 system_pods.go:61] "csi-hostpath-resizer-0" [3b07e95c-174c-43a1-b28e-d07f71af1028] Pending
	I1212 19:30:52.684145  140968 system_pods.go:61] "csi-hostpathplugin-mkfcn" [fe53d3cb-3e18-4853-9fdd-2c0f5b822937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:52.684148  140968 system_pods.go:61] "etcd-addons-347541" [cbb27be6-7d31-4137-9e8e-81f5778d9889] Running
	I1212 19:30:52.684153  140968 system_pods.go:61] "kube-apiserver-addons-347541" [65becc66-812b-4417-8600-67b7408d63e8] Running
	I1212 19:30:52.684157  140968 system_pods.go:61] "kube-controller-manager-addons-347541" [f2af5f9f-d9b9-4469-b80f-08bfe2e19358] Running
	I1212 19:30:52.684162  140968 system_pods.go:61] "kube-ingress-dns-minikube" [2b04ee36-5eba-4b96-995d-1a77e2ddb46b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:52.684165  140968 system_pods.go:61] "kube-proxy-x5bxp" [1efedaf7-228f-4318-bd8c-a85d80dd0b77] Running
	I1212 19:30:52.684169  140968 system_pods.go:61] "kube-scheduler-addons-347541" [b7da366a-1bbf-480f-8187-28545db9ed0a] Running
	I1212 19:30:52.684173  140968 system_pods.go:61] "metrics-server-85b7d694d7-tmr5k" [5dd23de9-3bea-45d2-b80b-4b966bf80193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:52.684179  140968 system_pods.go:61] "nvidia-device-plugin-daemonset-s9zn5" [9049612d-22d5-42ee-a561-b6acda7ef4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:52.684185  140968 system_pods.go:61] "registry-6b586f9694-5td7r" [201134be-c27b-4ed0-83ec-71d107dac0c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:52.684190  140968 system_pods.go:61] "registry-creds-764b6fb674-2lqlc" [8bde3033-d2e9-4aa8-85ec-6849a565941b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:52.684194  140968 system_pods.go:61] "registry-proxy-gxsjd" [0943e635-926e-40e1-9444-adcc285ac289] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:52.684200  140968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kgt2" [073ae593-9fae-4668-912c-99370421b081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:52.684210  140968 system_pods.go:61] "snapshot-controller-7d9fbc56b8-krfxw" [77651017-de3a-4f06-851e-1650fb810697] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:52.684213  140968 system_pods.go:61] "storage-provisioner" [1f852b24-b5fe-4b85-8007-74282a8e3746] Running
	I1212 19:30:52.684220  140968 system_pods.go:74] duration metric: took 33.955869ms to wait for pod list to return data ...
	I1212 19:30:52.684229  140968 default_sa.go:34] waiting for default service account to be created ...
	I1212 19:30:52.701443  140968 default_sa.go:45] found service account: "default"
	I1212 19:30:52.701469  140968 default_sa.go:55] duration metric: took 17.235107ms for default service account to be created ...
	I1212 19:30:52.701480  140968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 19:30:52.741834  140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 19:30:52.741868  140968 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 19:30:52.770410  140968 system_pods.go:86] 20 kube-system pods found
	I1212 19:30:52.770476  140968 system_pods.go:89] "amd-gpu-device-plugin-2xl4r" [ede87043-19cb-485d-8eb9-d84d809cdc54] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1212 19:30:52.770493  140968 system_pods.go:89] "coredns-66bc5c9577-vvxxj" [5d9292f5-1548-47ef-a76a-f488221712e1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:52.770509  140968 system_pods.go:89] "coredns-66bc5c9577-zf7x7" [193b24c3-32e5-4ca1-bebb-0a249a6a436e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 19:30:52.770520  140968 system_pods.go:89] "csi-hostpath-attacher-0" [7500a8ca-2ffc-4d75-ae8c-e49175987633] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 19:30:52.770529  140968 system_pods.go:89] "csi-hostpath-resizer-0" [3b07e95c-174c-43a1-b28e-d07f71af1028] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 19:30:52.770544  140968 system_pods.go:89] "csi-hostpathplugin-mkfcn" [fe53d3cb-3e18-4853-9fdd-2c0f5b822937] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 19:30:52.770550  140968 system_pods.go:89] "etcd-addons-347541" [cbb27be6-7d31-4137-9e8e-81f5778d9889] Running
	I1212 19:30:52.770557  140968 system_pods.go:89] "kube-apiserver-addons-347541" [65becc66-812b-4417-8600-67b7408d63e8] Running
	I1212 19:30:52.770564  140968 system_pods.go:89] "kube-controller-manager-addons-347541" [f2af5f9f-d9b9-4469-b80f-08bfe2e19358] Running
	I1212 19:30:52.770573  140968 system_pods.go:89] "kube-ingress-dns-minikube" [2b04ee36-5eba-4b96-995d-1a77e2ddb46b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 19:30:52.770580  140968 system_pods.go:89] "kube-proxy-x5bxp" [1efedaf7-228f-4318-bd8c-a85d80dd0b77] Running
	I1212 19:30:52.770586  140968 system_pods.go:89] "kube-scheduler-addons-347541" [b7da366a-1bbf-480f-8187-28545db9ed0a] Running
	I1212 19:30:52.770606  140968 system_pods.go:89] "metrics-server-85b7d694d7-tmr5k" [5dd23de9-3bea-45d2-b80b-4b966bf80193] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 19:30:52.770625  140968 system_pods.go:89] "nvidia-device-plugin-daemonset-s9zn5" [9049612d-22d5-42ee-a561-b6acda7ef4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 19:30:52.770634  140968 system_pods.go:89] "registry-6b586f9694-5td7r" [201134be-c27b-4ed0-83ec-71d107dac0c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 19:30:52.770644  140968 system_pods.go:89] "registry-creds-764b6fb674-2lqlc" [8bde3033-d2e9-4aa8-85ec-6849a565941b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 19:30:52.770652  140968 system_pods.go:89] "registry-proxy-gxsjd" [0943e635-926e-40e1-9444-adcc285ac289] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 19:30:52.770661  140968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kgt2" [073ae593-9fae-4668-912c-99370421b081] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:52.770672  140968 system_pods.go:89] "snapshot-controller-7d9fbc56b8-krfxw" [77651017-de3a-4f06-851e-1650fb810697] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 19:30:52.770684  140968 system_pods.go:89] "storage-provisioner" [1f852b24-b5fe-4b85-8007-74282a8e3746] Running
	I1212 19:30:52.770699  140968 system_pods.go:126] duration metric: took 69.208924ms to wait for k8s-apps to be running ...
	I1212 19:30:52.770714  140968 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 19:30:52.770801  140968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:30:52.806613  140968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:30:52.806646  140968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 19:30:52.903397  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:52.904197  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:52.921662  140968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 19:30:53.122733  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.402024  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.404763  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:53.621194  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:53.639676  140968 system_svc.go:56] duration metric: took 868.95016ms WaitForService to wait for kubelet
	I1212 19:30:53.639723  140968 kubeadm.go:587] duration metric: took 11.601255184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 19:30:53.639782  140968 node_conditions.go:102] verifying NodePressure condition ...
	I1212 19:30:53.639678  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.983084615s)
	I1212 19:30:53.656218  140968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 19:30:53.656253  140968 node_conditions.go:123] node cpu capacity is 2
	I1212 19:30:53.656307  140968 node_conditions.go:105] duration metric: took 16.509424ms to run NodePressure ...
	I1212 19:30:53.656324  140968 start.go:242] waiting for startup goroutines ...
	I1212 19:30:53.906953  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:53.907946  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.041289  140968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.119580351s)
	I1212 19:30:54.042533  140968 addons.go:495] Verifying addon gcp-auth=true in "addons-347541"
	I1212 19:30:54.044707  140968 out.go:179] * Verifying gcp-auth addon...
	I1212 19:30:54.046392  140968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 19:30:54.054336  140968 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 19:30:54.054374  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.119123  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.402755  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:54.404239  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.550691  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:54.621686  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:54.901411  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:54.902091  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.051241  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.120906  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.399501  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.403295  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:55.553138  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:55.653748  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:55.900150  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:55.901278  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.051717  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.121075  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.401913  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:56.402238  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.550251  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:56.620874  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:56.901320  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:56.901330  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:57.054097  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.156829  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.398857  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:57.399202  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.550157  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:57.619218  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:57.898296  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:57.898680  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:58.049911  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.119202  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.399133  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:58.399342  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:58.550352  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:58.618961  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:58.898354  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:58.898469  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:59.050452  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.121928  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.400505  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:59.400740  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:30:59.549628  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:30:59.621077  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:30:59.898822  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:30:59.899026  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.053082  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.154285  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.398298  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.399316  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:00.550630  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:00.619457  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:00.898321  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:00.898348  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:01.078462  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.125059  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.400696  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.402422  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:01.550220  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:01.618540  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:01.901675  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:01.902037  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:02.050678  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.119657  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.397449  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:02.397732  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:02.550518  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:02.620063  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:02.900899  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:02.901591  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.050761  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.118706  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.400305  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.401646  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:03.551215  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:03.619673  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:03.902052  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:03.904186  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:04.054616  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.118764  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:04.400903  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:04.402989  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.550063  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:04.620004  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:04.900396  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:04.900852  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:05.049917  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:05.122124  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:05.507475  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:05.507970  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.601388  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:05.618171  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:05.899716  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:05.904000  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:06.050422  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:06.119722  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:06.398903  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:06.399026  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:06.550481  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:06.619348  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:06.898291  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:06.898578  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.051570  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:07.153083  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:07.398542  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:07.399038  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:07.550674  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:07.619042  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:07.899297  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:07.899533  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.049859  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:08.121265  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:08.397754  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:08.397785  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.550031  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:08.620204  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:08.900325  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:08.901564  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:09.051310  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:09.126166  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:09.401057  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:09.401266  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:09.651025  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:09.651771  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:09.900963  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:09.903133  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:10.053312  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:10.155561  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:10.398960  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:10.399675  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:10.550157  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:10.620078  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:10.897413  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:10.897425  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:11.059270  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:11.118889  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:11.398676  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:11.398864  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:11.549956  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:11.619289  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:11.897902  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:11.898179  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:12.050363  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:12.119308  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:12.398594  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:12.399222  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:12.551150  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:12.624128  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:12.900795  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:12.900941  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:13.050473  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:13.120663  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:13.400690  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:13.401090  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:13.550183  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:13.620684  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:13.902954  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:13.904260  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:14.050132  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:14.121477  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:14.400772  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:14.401965  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:14.549941  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:14.620311  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:14.901437  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:14.901996  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:15.052104  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:15.120888  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:15.399601  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:15.400307  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:15.552178  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:15.621581  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:15.900641  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:15.901172  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:16.078158  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:16.120257  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:16.403710  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:16.405449  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:16.549130  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:16.619606  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:16.899673  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:16.899878  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:17.050254  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:17.119061  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:17.399384  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:17.404838  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:17.551889  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:17.620314  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:17.978047  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:17.980576  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:18.050688  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:18.120316  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:18.399818  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:18.400409  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:18.550185  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:18.627811  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:18.899944  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:18.902256  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:19.050665  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:19.118731  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:19.403003  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:19.403169  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:19.550972  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:19.620101  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:19.899161  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:19.899217  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:20.051791  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:20.119730  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:20.400830  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:20.400829  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:20.550068  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:20.619993  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:20.897580  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:20.897802  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:21.049907  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:21.119602  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:21.398008  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:21.398012  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:21.550644  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:21.618977  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:21.898493  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:21.899061  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:22.055944  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:22.122547  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:22.399184  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:22.401191  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:22.552566  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:22.622209  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:22.899983  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:22.900890  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:23.051011  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:23.119925  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:23.398228  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:23.398845  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:23.550193  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:23.620131  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:23.898540  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:23.898678  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:24.077827  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:24.121848  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:24.399847  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:24.399948  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:24.553432  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:24.622004  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:24.899535  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:24.899830  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:25.051067  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:25.119449  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:25.403832  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:25.403958  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 19:31:25.554792  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:25.618938  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:25.900264  140968 kapi.go:107] duration metric: took 34.505937199s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 19:31:25.900411  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:26.051457  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:26.153902  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:26.400498  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:26.549924  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:26.620491  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:26.897710  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:27.051694  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:27.152649  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:27.400482  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:27.550182  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:27.651054  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:27.898350  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:28.050588  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:28.119391  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:28.397542  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:28.561580  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:28.623576  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:28.899256  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:29.053908  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:29.120206  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:29.399043  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:29.553181  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:29.621184  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:29.898391  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:30.051184  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:30.121523  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:30.399297  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:30.552579  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:30.620386  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:30.899760  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:31.079925  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:31.120592  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:31.397370  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:31.551277  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:31.619902  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:31.899483  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:32.050052  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:32.121228  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:32.397600  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:32.552043  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:32.619982  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:32.898363  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:33.051325  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:33.118373  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:33.397523  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:33.549562  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:33.619154  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:33.898372  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:34.049238  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:34.119119  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:34.399251  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:34.551870  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:34.628126  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:34.901043  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:35.053544  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:35.118924  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:35.397959  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:35.550802  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:35.620394  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:35.900359  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:36.054065  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:36.121752  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:36.398748  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:36.550521  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:36.621151  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:36.897678  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:37.064198  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:37.121209  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:37.397610  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:37.553846  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:37.619993  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:37.927783  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:38.053007  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:38.153285  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:38.397619  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:38.550502  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:38.652053  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:38.898472  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:39.050180  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:39.119198  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:39.397206  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:39.551253  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:39.618818  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:39.898829  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:40.049831  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:40.118880  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:40.401449  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:40.551225  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:40.619794  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:40.901873  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:41.050034  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:41.123689  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:41.398834  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:41.550242  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:41.618371  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:41.900032  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:42.050501  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:42.120243  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:42.403296  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:42.550702  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:42.621778  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:42.898133  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:43.051781  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:43.118545  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:43.398318  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:43.552749  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:43.622169  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:43.899465  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:44.053725  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:44.119808  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:44.399007  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:44.556427  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:44.624674  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:44.903667  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:45.050903  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:45.120297  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:45.399154  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:45.552668  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:45.618326  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:45.900435  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:46.048936  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:46.124074  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:46.404824  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:46.550820  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:46.619100  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:46.898752  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:47.049672  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:47.123633  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:47.398937  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:47.549944  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:47.619732  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:47.898003  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:48.057437  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:48.157688  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:48.399498  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:48.550093  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:48.619493  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:48.897818  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:49.050342  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:49.118570  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:49.398985  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:49.692447  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:49.693089  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:49.902721  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:50.053347  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:50.120416  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:50.398441  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:50.552841  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:50.619664  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:50.899356  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:51.050944  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:51.129581  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:51.397936  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:51.550617  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:51.620135  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:51.899129  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:52.054532  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:52.121634  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:52.399718  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:52.551029  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:52.623633  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:52.926041  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:53.052799  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:53.153297  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:53.398085  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:53.549891  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:53.621587  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:53.899754  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:54.051985  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:54.153768  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:54.398184  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:54.551768  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:54.619345  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:54.900008  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:55.050400  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:55.119456  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:55.399661  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:55.553589  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:55.620189  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:55.898597  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:56.049442  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:56.121737  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:56.399445  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:56.549388  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:56.618810  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:56.898602  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:57.051214  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:57.120020  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:57.399673  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:57.551795  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:57.619238  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:57.901077  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:58.074298  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:58.136957  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:58.401587  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:58.550946  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:58.620900  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:58.900444  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:59.054402  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:59.122242  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:59.398219  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:31:59.555651  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:31:59.622353  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:31:59.897534  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:00.055385  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:00.121790  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:00.399186  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:00.552977  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:00.619695  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:00.897986  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:01.051739  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:01.120317  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:01.398673  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:01.550442  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:01.621105  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:01.898352  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:02.052148  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:02.121278  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:02.399049  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:02.706042  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:02.707339  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:02.901684  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:03.051382  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:03.120836  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:03.398769  140968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 19:32:03.550741  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:03.618995  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:03.898979  140968 kapi.go:107] duration metric: took 1m12.504725695s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 19:32:04.052014  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:04.121228  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:04.552556  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:04.620192  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:05.050316  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:05.118849  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:05.550083  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:05.619382  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 19:32:06.050213  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:06.119620  140968 kapi.go:107] duration metric: took 1m13.504471387s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 19:32:06.551144  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:07.050465  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:07.550586  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:08.054688  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:08.553261  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:09.051320  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:09.552445  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:10.052347  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:10.550608  140968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 19:32:11.050544  140968 kapi.go:107] duration metric: took 1m17.004149978s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 19:32:11.052025  140968 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-347541 cluster.
	I1212 19:32:11.053102  140968 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 19:32:11.054183  140968 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1212 19:32:11.055308  140968 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, amd-gpu-device-plugin, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1212 19:32:11.056352  140968 addons.go:530] duration metric: took 1m29.017892967s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner cloud-spanner inspektor-gadget amd-gpu-device-plugin registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1212 19:32:11.056390  140968 start.go:247] waiting for cluster config update ...
	I1212 19:32:11.056407  140968 start.go:256] writing updated cluster config ...
	I1212 19:32:11.056664  140968 ssh_runner.go:195] Run: rm -f paused
	I1212 19:32:11.062267  140968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:32:11.065991  140968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vvxxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.070583  140968 pod_ready.go:94] pod "coredns-66bc5c9577-vvxxj" is "Ready"
	I1212 19:32:11.070603  140968 pod_ready.go:86] duration metric: took 4.589865ms for pod "coredns-66bc5c9577-vvxxj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.072585  140968 pod_ready.go:83] waiting for pod "etcd-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.076835  140968 pod_ready.go:94] pod "etcd-addons-347541" is "Ready"
	I1212 19:32:11.076853  140968 pod_ready.go:86] duration metric: took 4.250439ms for pod "etcd-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.078993  140968 pod_ready.go:83] waiting for pod "kube-apiserver-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.085172  140968 pod_ready.go:94] pod "kube-apiserver-addons-347541" is "Ready"
	I1212 19:32:11.085190  140968 pod_ready.go:86] duration metric: took 6.180955ms for pod "kube-apiserver-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.087903  140968 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.466780  140968 pod_ready.go:94] pod "kube-controller-manager-addons-347541" is "Ready"
	I1212 19:32:11.466810  140968 pod_ready.go:86] duration metric: took 378.889786ms for pod "kube-controller-manager-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:11.666238  140968 pod_ready.go:83] waiting for pod "kube-proxy-x5bxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:12.066028  140968 pod_ready.go:94] pod "kube-proxy-x5bxp" is "Ready"
	I1212 19:32:12.066058  140968 pod_ready.go:86] duration metric: took 399.793535ms for pod "kube-proxy-x5bxp" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:12.266206  140968 pod_ready.go:83] waiting for pod "kube-scheduler-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:12.667224  140968 pod_ready.go:94] pod "kube-scheduler-addons-347541" is "Ready"
	I1212 19:32:12.667253  140968 pod_ready.go:86] duration metric: took 401.02482ms for pod "kube-scheduler-addons-347541" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 19:32:12.667265  140968 pod_ready.go:40] duration metric: took 1.604968059s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 19:32:12.713555  140968 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 19:32:12.716172  140968 out.go:179] * Done! kubectl is now configured to use "addons-347541" cluster and "default" namespace by default
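(Editorial aside, not part of the captured logs: the gcp-auth messages above say a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a purely illustrative sketch against this cluster, and assuming any label value is accepted since the message only names the key, such a pod could be created like so:

	# hypothetical example: run a pod that opts out of GCP credential mounting;
	# the label key comes from the gcp-auth addon message above, the value "true"
	# and the :latest tag are assumptions for illustration only
	kubectl --context addons-347541 run no-gcp-creds \
	  --image=public.ecr.aws/nginx/nginx:latest \
	  --labels="gcp-auth-skip-secret=true"
)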
	
	
	==> CRI-O <==
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.751524705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb8f0e2-dec2-4ff9-a0a1-688ee158fc77 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.761930170Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=cbd822b4-636b-489a-a0f4-1a102568cbcd name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.761996140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbd822b4-636b-489a-a0f4-1a102568cbcd name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.785710262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5d90005-bc37-484a-b557-3d492b235c54 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.785808342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5d90005-bc37-484a-b557-3d492b235c54 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.787157454Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ae4e5d5-f6cd-42e6-b809-5a99b3a49346 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.788402367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122788347717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ae4e5d5-f6cd-42e6-b809-5a99b3a49346 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.789235021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.789583270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.790088512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb8295ab-4d9e-46ff-9f02-d3cfaa5476f3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.813582526Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.820853827Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8036143c-d577-4c97-b63e-60f51cf9be82 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.821017639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8036143c-d577-4c97-b63e-60f51cf9be82 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.822545089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f1720d2-6cf9-4e8d-9686-5edff196c0ce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.823780109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122823750512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f1720d2-6cf9-4e8d-9686-5edff196c0ce name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.824793820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.824897394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.825194661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77cbf8e6-dbdf-43d5-ad7f-ea10209f6870 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.855153111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd23bce0-a5c0-454a-8cd3-21894f402bf6 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.855296188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd23bce0-a5c0-454a-8cd3-21894f402bf6 name=/runtime.v1.RuntimeService/Version
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.857978061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5927c9fc-d0be-4b19-9267-508394eeecb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.860693308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765568122860661173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5927c9fc-d0be-4b19-9267-508394eeecb9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863293776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863392969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 19:35:22 addons-347541 crio[812]: time="2025-12-12 19:35:22.863993429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3a73781e60a95e8b2a43149459448fd7c7e33dc7082e8230749f5479db18a37e,PodSandboxId:d1bc8541b182df3000fab9ab6672740aafcf7a7b30783934ee5699a6cd87946c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765567979389597466,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 28bd2e4c-a606-45ae-bff8-93cc740702b2,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e57e904e6080a43f0c054c3ca11aacc514a784efd56578147be73d316fdc7363,PodSandboxId:003c1dd6f275a26397bde10bac721f2c972d32f9770501bdbc84ca1ebe403c43,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765567937896034608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86482a73-fed6-4ee2-93dd-8079de7542f0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec93bc4394f3154447edaf2dccb4600daaf9b0499f3d4333a07e22e7d58673c,PodSandboxId:a46ac965878cf313218d8cc7223a8bdd5bb30542c5f584bc36d2861d6fb1f31e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765567922853623140,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-hppl2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7b5c91af-8fc7-4d47-875e-d78a54b2c59f,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5728418319c3823fde8ec0d5902908261a293f2719f68770fcf113e98bdce493,PodSandboxId:e5663746f2894d5e7c986087132b0c4043e181abee9a49a34a06356e29fe8c44,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765567904755015404,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-twfg2,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8f2f4060-276e-41bb-bed9-c734bf5967ce,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27b90859836907690570ed8f4f5cbc16fb5d0b64660f4f1bae895049c4c8514d,PodSandboxId:f6b67a56ec25ed99a6254ee2918988d2702c5d588a8c4e96230d61c5cf974c24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765567904553571552,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-pdz68,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 414c9a1f-9f9e-44b4-be77-987eadd5f18c,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2171eb5eb101a13d23803b5a8ebf0a86a8d17b45e7acbca8b43ba049f1b7512,PodSandboxId:a0471b2e05c0ab96a47f67c6fb7fdb8ee11a2762c72e3f21b7239a8255324897,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765567901140077918,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-4tdnr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c246219c-ebf0-4567-bacc-ed288d17a0e1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29cb209f08af1036fc97325bb4faaca22221448f614c69562527ca5dd4a9b13b,PodSandboxId:2dc32e7104f9974f471e97b0147337c2171986526e4fea85e43675d7f3ae83b2,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765567879411513042,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b04ee36-5eba-4b96-995d-1a77e2ddb46b,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23f0e7375564e7015b2d15f50b51e3fb436315a9f6c364ec02fdd5c59190723c,PodSandboxId:f33474ff319cb99f546ffff3e938a58d9cff6269b69eda57c424b42ae86e876e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765567859984510571,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-2xl4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ede87043-19cb-485d-8eb9-d84d809cdc54,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83,PodSandboxId:c9804addfe1ef68626ad31a2a5dddaca92997464acd44f1013af73e997e42e5d,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765567849703328907,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f852b24-b5fe-4b85-8007-74282a8e3746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28,PodSandboxId:e494954752c0ff47cccd80ea99aba276d5dcfc49147e30b0eaba4630ea02883b,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765567843647449435,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vvxxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d9292f5-1548-47ef-a76a-f488221712e1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0,PodSandboxId:46b7fe6eaddbb5a0738092f9932330558dd699058a0c1ecbdd81313662caae5e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765567842961644101,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x5bxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1efedaf7-228f-4318-bd8c-a85d80dd0b77,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0,PodSandboxId:1c7dc9dc3acc41ce092c7cbaf82d78103ff5f0cb1e52591555183d9316bf9980,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765567830144455928,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce3b7a702fbef9a6121b414f85545a0,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3,PodSandboxId:5c3481bbc735f57e8e99bfc3693ca4746ed92c3acf843246ef44a7239802eeaf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765567830115776659,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e098927010764c91c96aa66fd9ba6efc,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5,PodSandboxId:f0a593f81c8ac21d6b183fedd115a17bc50b72d69bf90826e37a050976ccbdee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765567830099509594,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a40eda06
a75df0cd69d41a597946a693,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5,PodSandboxId:61e9bb926854de445e9d469dbc8eb0bf4a1494fd0965e66adf02270a40446bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765567830089573242,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-347541,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31a47e93b8e8cc0ae4fe59ac4b3e6151,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbdec8a7-2bfc-46eb-a06c-5c65d509a0a9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3a73781e60a95       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   d1bc8541b182d       nginx                                       default
	e57e904e6080a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   003c1dd6f275a       busybox                                     default
	9ec93bc4394f3       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   a46ac965878cf       ingress-nginx-controller-85d4c799dd-hppl2   ingress-nginx
	5728418319c38       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago       Exited              patch                     1                   e5663746f2894       ingress-nginx-admission-patch-twfg2         ingress-nginx
	27b9085983690       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   f6b67a56ec25e       ingress-nginx-admission-create-pdz68        ingress-nginx
	b2171eb5eb101       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   a0471b2e05c0a       local-path-provisioner-648f6765c9-4tdnr     local-path-storage
	29cb209f08af1       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   2dc32e7104f99       kube-ingress-dns-minikube                   kube-system
	23f0e7375564e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f33474ff319cb       amd-gpu-device-plugin-2xl4r                 kube-system
	aa121a6614b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   c9804addfe1ef       storage-provisioner                         kube-system
	f4a0b61582ab3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   e494954752c0f       coredns-66bc5c9577-vvxxj                    kube-system
	c5b5a1c18d286       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   46b7fe6eaddbb       kube-proxy-x5bxp                            kube-system
	e3e4a1f5db778       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   1c7dc9dc3acc4       kube-scheduler-addons-347541                kube-system
	da6f6f7fbcb40       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   5c3481bbc735f       kube-controller-manager-addons-347541       kube-system
	993da190cfa74       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   f0a593f81c8ac       etcd-addons-347541                          kube-system
	e4a66ca19ad2e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   61e9bb926854d       kube-apiserver-addons-347541                kube-system
	
	
	==> coredns [f4a0b61582ab330327b4249677dc6b464244ac28ab0a195dc7dbdf4a6dbf6b28] <==
	[INFO] 10.244.0.8:46060 - 9455 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000227976s
	[INFO] 10.244.0.8:46060 - 26533 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106511s
	[INFO] 10.244.0.8:46060 - 2919 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000124641s
	[INFO] 10.244.0.8:46060 - 45545 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000167789s
	[INFO] 10.244.0.8:46060 - 12019 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000234577s
	[INFO] 10.244.0.8:46060 - 5588 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114497s
	[INFO] 10.244.0.8:46060 - 4388 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000107016s
	[INFO] 10.244.0.8:38848 - 48219 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106622s
	[INFO] 10.244.0.8:38848 - 47875 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000315498s
	[INFO] 10.244.0.8:56476 - 34446 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114442s
	[INFO] 10.244.0.8:56476 - 34715 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106716s
	[INFO] 10.244.0.8:37845 - 40957 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088945s
	[INFO] 10.244.0.8:37845 - 40699 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000220127s
	[INFO] 10.244.0.8:57229 - 20844 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075743s
	[INFO] 10.244.0.8:57229 - 21074 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000222914s
	[INFO] 10.244.0.23:48114 - 7782 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000371664s
	[INFO] 10.244.0.23:47280 - 11813 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00042697s
	[INFO] 10.244.0.23:54307 - 61588 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154412s
	[INFO] 10.244.0.23:39259 - 52464 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114222s
	[INFO] 10.244.0.23:52429 - 21285 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092595s
	[INFO] 10.244.0.23:58630 - 9848 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00025888s
	[INFO] 10.244.0.23:56600 - 1549 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004322962s
	[INFO] 10.244.0.23:43005 - 10322 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.004139184s
	[INFO] 10.244.0.26:54377 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000319335s
	[INFO] 10.244.0.26:55768 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00039087s
	
	
	==> describe nodes <==
	Name:               addons-347541
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-347541
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=addons-347541
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T19_30_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-347541
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 19:30:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-347541
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 19:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 19:33:10 +0000   Fri, 12 Dec 2025 19:30:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 19:33:10 +0000   Fri, 12 Dec 2025 19:30:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 19:33:10 +0000   Fri, 12 Dec 2025 19:30:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 19:33:10 +0000   Fri, 12 Dec 2025 19:30:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    addons-347541
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 b1fb684fda1f46759f4baa96973add54
	  System UUID:                b1fb684f-da1f-4675-9f4b-aa96973add54
	  Boot ID:                    52ef5230-d59b-4f34-a260-06b6298107c5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  default                     hello-world-app-5d498dc89-qwv5d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-hppl2    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 amd-gpu-device-plugin-2xl4r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-66bc5c9577-vvxxj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m41s
	  kube-system                 etcd-addons-347541                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m46s
	  kube-system                 kube-apiserver-addons-347541                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-347541        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-x5bxp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-addons-347541                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  local-path-storage          local-path-provisioner-648f6765c9-4tdnr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node addons-347541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node addons-347541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node addons-347541 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet          Node addons-347541 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet          Node addons-347541 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet          Node addons-347541 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m45s                  kubelet          Node addons-347541 status is now: NodeReady
	  Normal  RegisteredNode           4m42s                  node-controller  Node addons-347541 event: Registered Node addons-347541 in Controller
	
	
	==> dmesg <==
	[  +0.479536] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.978987] kauditd_printk_skb: 318 callbacks suppressed
	[  +0.393842] kauditd_printk_skb: 380 callbacks suppressed
	[  +1.060454] kauditd_printk_skb: 315 callbacks suppressed
	[Dec12 19:31] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.128736] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.326519] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.131008] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.163427] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.021516] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.550965] kauditd_printk_skb: 121 callbacks suppressed
	[  +1.002231] kauditd_printk_skb: 140 callbacks suppressed
	[Dec12 19:32] kauditd_printk_skb: 61 callbacks suppressed
	[  +9.440573] kauditd_printk_skb: 68 callbacks suppressed
	[  +2.052464] kauditd_printk_skb: 53 callbacks suppressed
	[ +10.765154] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.903422] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.721002] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.512297] kauditd_printk_skb: 129 callbacks suppressed
	[  +3.578938] kauditd_printk_skb: 204 callbacks suppressed
	[Dec12 19:33] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000050] kauditd_printk_skb: 83 callbacks suppressed
	[  +5.850244] kauditd_printk_skb: 41 callbacks suppressed
	[Dec12 19:35] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [993da190cfa742a9b8f23f3ae63ccae627606b22fdf705984f1cd8e26c9054f5] <==
	{"level":"info","ts":"2025-12-12T19:31:36.266314Z","caller":"traceutil/trace.go:172","msg":"trace[644682985] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"124.31474ms","start":"2025-12-12T19:31:36.141985Z","end":"2025-12-12T19:31:36.266300Z","steps":["trace[644682985] 'process raft request'  (duration: 124.180884ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:31:49.684443Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.231496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:31:49.684509Z","caller":"traceutil/trace.go:172","msg":"trace[484670056] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1101; }","duration":"182.306912ms","start":"2025-12-12T19:31:49.502192Z","end":"2025-12-12T19:31:49.684499Z","steps":["trace[484670056] 'range keys from in-memory index tree'  (duration: 182.109918ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:31:49.684669Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.261649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:31:49.684708Z","caller":"traceutil/trace.go:172","msg":"trace[1925181992] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1101; }","duration":"138.300427ms","start":"2025-12-12T19:31:49.546400Z","end":"2025-12-12T19:31:49.684701Z","steps":["trace[1925181992] 'range keys from in-memory index tree'  (duration: 138.222072ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T19:32:02.699962Z","caller":"traceutil/trace.go:172","msg":"trace[292230520] linearizableReadLoop","detail":"{readStateIndex:1206; appliedIndex:1206; }","duration":"154.3551ms","start":"2025-12-12T19:32:02.545567Z","end":"2025-12-12T19:32:02.699923Z","steps":["trace[292230520] 'read index received'  (duration: 154.349023ms)","trace[292230520] 'applied index is now lower than readState.Index'  (duration: 5.275µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T19:32:02.700125Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.54331ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:32:02.700146Z","caller":"traceutil/trace.go:172","msg":"trace[2051985376] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1172; }","duration":"154.576987ms","start":"2025-12-12T19:32:02.545563Z","end":"2025-12-12T19:32:02.700140Z","steps":["trace[2051985376] 'agreement among raft nodes before linearized reading'  (duration: 154.516893ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T19:32:02.700148Z","caller":"traceutil/trace.go:172","msg":"trace[1238524007] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"282.517082ms","start":"2025-12-12T19:32:02.417621Z","end":"2025-12-12T19:32:02.700138Z","steps":["trace[1238524007] 'process raft request'  (duration: 282.423215ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-12T19:32:38.516037Z","caller":"traceutil/trace.go:172","msg":"trace[2131806319] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"102.295139ms","start":"2025-12-12T19:32:38.413727Z","end":"2025-12-12T19:32:38.516022Z","steps":["trace[2131806319] 'process raft request'  (duration: 102.193907ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:32:40.233037Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"258.712205ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:32:40.233098Z","caller":"traceutil/trace.go:172","msg":"trace[234955170] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1387; }","duration":"258.785979ms","start":"2025-12-12T19:32:39.974300Z","end":"2025-12-12T19:32:40.233086Z","steps":["trace[234955170] 'agreement among raft nodes before linearized reading'  (duration: 28.947629ms)","trace[234955170] 'range keys from in-memory index tree'  (duration: 229.734196ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T19:32:40.233382Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.808262ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7391140448856292128 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/metrics-server\" mod_revision:599 > success:<request_delete_range:<key:\"/registry/serviceaccounts/kube-system/metrics-server\" > > failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/metrics-server\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-12-12T19:32:40.233436Z","caller":"traceutil/trace.go:172","msg":"trace[1904781771] linearizableReadLoop","detail":"{readStateIndex:1430; appliedIndex:1429; }","duration":"230.233016ms","start":"2025-12-12T19:32:40.003196Z","end":"2025-12-12T19:32:40.233429Z","steps":["trace[1904781771] 'read index received'  (duration: 39.799µs)","trace[1904781771] 'applied index is now lower than readState.Index'  (duration: 230.192774ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-12T19:32:40.233769Z","caller":"traceutil/trace.go:172","msg":"trace[1977029529] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1388; }","duration":"279.207722ms","start":"2025-12-12T19:32:39.954551Z","end":"2025-12-12T19:32:40.233759Z","steps":["trace[1977029529] 'process raft request'  (duration: 48.735418ms)","trace[1977029529] 'compare'  (duration: 229.635665ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-12T19:32:40.233978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.600086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-g8n6k\" limit:1 ","response":"range_response_count:1 size:4671"}
	{"level":"info","ts":"2025-12-12T19:32:40.233996Z","caller":"traceutil/trace.go:172","msg":"trace[1926552127] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-5ff678cb9-g8n6k; range_end:; response_count:1; response_revision:1388; }","duration":"254.622071ms","start":"2025-12-12T19:32:39.979369Z","end":"2025-12-12T19:32:40.233992Z","steps":["trace[1926552127] 'agreement among raft nodes before linearized reading'  (duration: 254.543767ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:32:40.234119Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.307343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-85b7d694d7-tmr5k\" limit:1 ","response":"range_response_count:1 size:4650"}
	{"level":"info","ts":"2025-12-12T19:32:40.234132Z","caller":"traceutil/trace.go:172","msg":"trace[1772879783] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-85b7d694d7-tmr5k; range_end:; response_count:1; response_revision:1388; }","duration":"255.323133ms","start":"2025-12-12T19:32:39.978805Z","end":"2025-12-12T19:32:40.234128Z","steps":["trace[1772879783] 'agreement among raft nodes before linearized reading'  (duration: 255.273723ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:32:40.234294Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.505703ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:32:40.234311Z","caller":"traceutil/trace.go:172","msg":"trace[2075638050] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1388; }","duration":"153.524604ms","start":"2025-12-12T19:32:40.080782Z","end":"2025-12-12T19:32:40.234307Z","steps":["trace[2075638050] 'agreement among raft nodes before linearized reading'  (duration: 153.491733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:33:33.154421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.378021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-12-12T19:33:33.154506Z","caller":"traceutil/trace.go:172","msg":"trace[858335737] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:1864; }","duration":"112.476775ms","start":"2025-12-12T19:33:33.042019Z","end":"2025-12-12T19:33:33.154496Z","steps":["trace[858335737] 'range keys from in-memory index tree'  (duration: 112.157622ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-12T19:33:33.154826Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.883006ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-12T19:33:33.154912Z","caller":"traceutil/trace.go:172","msg":"trace[626470589] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:1864; }","duration":"113.002623ms","start":"2025-12-12T19:33:33.041901Z","end":"2025-12-12T19:33:33.154904Z","steps":["trace[626470589] 'range keys from in-memory index tree'  (duration: 112.82436ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:35:23 up 5 min,  0 users,  load average: 0.21, 0.72, 0.40
	Linux addons-347541 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 12 05:38:44 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e4a66ca19ad2e53e2d61831c7925ed52e18848199736d2911d817c709827eda5] <==
	 > logger="UnhandledError"
	E1212 19:31:28.477186       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.71.121:443: connect: connection refused" logger="UnhandledError"
	E1212 19:31:28.477854       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.71.121:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.71.121:443: connect: connection refused" logger="UnhandledError"
	I1212 19:31:28.551841       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 19:32:25.520624       1 conn.go:339] Error on socket receive: read tcp 192.168.39.202:8443->192.168.39.1:57530: use of closed network connection
	E1212 19:32:25.699849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.202:8443->192.168.39.1:57572: use of closed network connection
	I1212 19:32:34.805123       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.84.172"}
	I1212 19:32:54.111764       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 19:32:54.300308       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.209.242"}
	I1212 19:33:07.502455       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1212 19:33:29.486902       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1212 19:33:31.596941       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 19:33:31.597423       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 19:33:31.629618       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 19:33:31.631785       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 19:33:31.664026       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 19:33:31.664087       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 19:33:31.674804       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 19:33:31.674900       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1212 19:33:31.801159       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1212 19:33:31.801370       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1212 19:33:32.664286       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1212 19:33:32.802612       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1212 19:33:32.826429       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1212 19:35:21.815562       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.90.11"}
	
	
	==> kube-controller-manager [da6f6f7fbcb408d5e35a803b554ef80fbe5b17a42b5c7b4dffc8e376aff7c5d3] <==
	I1212 19:33:41.109270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 19:33:41.154742       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 19:33:41.154782       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 19:33:41.835177       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:33:41.836457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:33:43.720866       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:33:43.721784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:33:47.531952       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:33:47.532933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:33:51.157880       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:33:51.158941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:33:54.920200       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:33:54.921164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:06.741556       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:06.742762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:16.338958       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:16.339884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:16.341809       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:16.342720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:38.681703       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:38.683080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:45.786612       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:45.787616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1212 19:34:59.010192       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1212 19:34:59.011419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [c5b5a1c18d28614bf90454b948d289136b0f20c9341fe303e084b91bd607c3c0] <==
	I1212 19:30:43.661474       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 19:30:43.762593       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 19:30:43.763408       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.202"]
	E1212 19:30:43.764264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 19:30:43.949198       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 19:30:43.949588       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 19:30:43.949914       1 server_linux.go:132] "Using iptables Proxier"
	I1212 19:30:43.966040       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 19:30:43.967118       1 server.go:527] "Version info" version="v1.34.2"
	I1212 19:30:43.968073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 19:30:43.972932       1 config.go:200] "Starting service config controller"
	I1212 19:30:43.972957       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 19:30:43.972972       1 config.go:106] "Starting endpoint slice config controller"
	I1212 19:30:43.972975       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 19:30:43.972985       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 19:30:43.972988       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 19:30:43.979580       1 config.go:309] "Starting node config controller"
	I1212 19:30:43.979605       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 19:30:43.979670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 19:30:44.073958       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 19:30:44.073976       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 19:30:44.073990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e3e4a1f5db77865ef545f2488910456b56b11d204019cb86b5f5c0cc1d270cc0] <==
	E1212 19:30:33.517815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:30:33.517970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 19:30:33.518139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 19:30:33.518183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 19:30:33.518797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:30:34.322323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 19:30:34.372754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 19:30:34.383137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 19:30:34.413931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 19:30:34.433357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 19:30:34.488002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 19:30:34.514004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 19:30:34.522340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 19:30:34.549670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 19:30:34.679731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 19:30:34.703912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1212 19:30:34.713458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 19:30:34.714105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 19:30:34.844773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 19:30:34.944068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 19:30:34.994477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 19:30:35.021036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 19:30:35.067265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 19:30:35.114701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1212 19:30:37.507973       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 19:33:40 addons-347541 kubelet[1502]: I1212 19:33:40.665540    1502 scope.go:117] "RemoveContainer" containerID="cf69239fad9f106ef4c497631e55fa6e89f2319c68106ffc2cdeffa0be2d0619"
	Dec 12 19:33:40 addons-347541 kubelet[1502]: I1212 19:33:40.785182    1502 scope.go:117] "RemoveContainer" containerID="3b3e3f1179bc8c11c310a9f2033fb3f323372b82c6aea0fc1f9032b498d6c8d7"
	Dec 12 19:33:47 addons-347541 kubelet[1502]: E1212 19:33:47.291488    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568027290683622 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:33:47 addons-347541 kubelet[1502]: E1212 19:33:47.291537    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568027290683622 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:33:57 addons-347541 kubelet[1502]: E1212 19:33:57.295763    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568037295300766 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:33:57 addons-347541 kubelet[1502]: E1212 19:33:57.295802    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568037295300766 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:07 addons-347541 kubelet[1502]: E1212 19:34:07.297966    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568047297568250 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:07 addons-347541 kubelet[1502]: E1212 19:34:07.297991    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568047297568250 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:17 addons-347541 kubelet[1502]: E1212 19:34:17.300561    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568057300065143 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:17 addons-347541 kubelet[1502]: E1212 19:34:17.300587    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568057300065143 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:27 addons-347541 kubelet[1502]: E1212 19:34:27.303520    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568067302782573 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:27 addons-347541 kubelet[1502]: E1212 19:34:27.303543    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568067302782573 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:37 addons-347541 kubelet[1502]: E1212 19:34:37.307495    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568077307235351 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:37 addons-347541 kubelet[1502]: E1212 19:34:37.307514    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568077307235351 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:47 addons-347541 kubelet[1502]: E1212 19:34:47.310028    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568087309721205 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:47 addons-347541 kubelet[1502]: E1212 19:34:47.310049    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568087309721205 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:50 addons-347541 kubelet[1502]: I1212 19:34:50.079912    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2xl4r" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:34:57 addons-347541 kubelet[1502]: E1212 19:34:57.313262    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568097312553772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:57 addons-347541 kubelet[1502]: E1212 19:34:57.313284    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568097312553772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:34:59 addons-347541 kubelet[1502]: I1212 19:34:59.079330    1502 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 19:35:07 addons-347541 kubelet[1502]: E1212 19:35:07.316532    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568107316101700 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:35:07 addons-347541 kubelet[1502]: E1212 19:35:07.316554    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568107316101700 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:35:17 addons-347541 kubelet[1502]: E1212 19:35:17.319545    1502 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765568117319172287 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:35:17 addons-347541 kubelet[1502]: E1212 19:35:17.319582    1502 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765568117319172287 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 12 19:35:21 addons-347541 kubelet[1502]: I1212 19:35:21.794355    1502 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtdj\" (UniqueName: \"kubernetes.io/projected/294e2720-bcd8-4163-9911-1ef5a6bbc9ba-kube-api-access-mjtdj\") pod \"hello-world-app-5d498dc89-qwv5d\" (UID: \"294e2720-bcd8-4163-9911-1ef5a6bbc9ba\") " pod="default/hello-world-app-5d498dc89-qwv5d"
	
	
	==> storage-provisioner [aa121a6614b9cb7f5e3f51937ba612d6ce2cf89d1dde25294b15039e14722e83] <==
	W1212 19:34:57.688145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:34:59.691582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:34:59.695818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:01.699198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:01.706917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:03.709926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:03.714810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:05.717664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:05.724341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:07.727671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:07.732096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:09.735653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:09.742922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:11.747057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:11.751522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:13.754866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:13.761404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:15.764568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:15.769199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:17.772055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:17.777084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:19.780276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:19.784786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:21.788822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 19:35:21.799347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-347541 -n addons-347541
helpers_test.go:270: (dbg) Run:  kubectl --context addons-347541 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2: exit status 1 (82.609843ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-qwv5d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-347541/192.168.39.202
	Start Time:       Fri, 12 Dec 2025 19:35:21 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjtdj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mjtdj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-qwv5d to addons-347541
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pdz68" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-twfg2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-347541 describe pod hello-world-app-5d498dc89-qwv5d ingress-nginx-admission-create-pdz68 ingress-nginx-admission-patch-twfg2: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable ingress-dns --alsologtostderr -v=1: (1.649720367s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable ingress --alsologtostderr -v=1: (7.68881061s)
--- FAIL: TestAddons/parallel/Ingress (159.37s)

                                                
                                    
x
+
TestPreload (144.9s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-056213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1212 20:24:07.920649  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-056213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m27.267222232s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-056213 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-056213 image pull gcr.io/k8s-minikube/busybox: (3.635372282s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-056213
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-056213: (8.371503416s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-056213 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-056213 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.127348495s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-056213 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-12 20:25:56.017242101 +0000 UTC m=+3408.660843470
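The sequence above is the whole point of TestPreload: pull gcr.io/k8s-minikube/busybox while preload is disabled, stop, restart with preload enabled, and expect the pulled image to still be present in the runtime's image store. The list printed above contains only the preloaded system images, so the expectation at preload_test.go:73 fails. A minimal sketch of the equivalent check (illustrative, not the test's actual code; binary path and profile name taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Illustrative only: run "image list" for the profile and look for busybox,
    // which is what the assertion at preload_test.go:73 boils down to.
    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "test-preload-056213", "image", "list").CombinedOutput()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("FAIL: busybox not found in image list")
            return
        }
        fmt.Println("PASS: busybox survived the preload restart")
    }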
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-056213 -n test-preload-056213
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-056213 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-943484 ssh -n multinode-943484-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ ssh     │ multinode-943484 ssh -n multinode-943484 sudo cat /home/docker/cp-test_multinode-943484-m03_multinode-943484.txt                                          │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ cp      │ multinode-943484 cp multinode-943484-m03:/home/docker/cp-test.txt multinode-943484-m02:/home/docker/cp-test_multinode-943484-m03_multinode-943484-m02.txt │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ ssh     │ multinode-943484 ssh -n multinode-943484-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ ssh     │ multinode-943484 ssh -n multinode-943484-m02 sudo cat /home/docker/cp-test_multinode-943484-m03_multinode-943484-m02.txt                                  │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ node    │ multinode-943484 node stop m03                                                                                                                            │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ node    │ multinode-943484 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ node    │ list -p multinode-943484                                                                                                                                  │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ stop    │ -p multinode-943484                                                                                                                                       │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:16 UTC │
	│ start   │ -p multinode-943484 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:16 UTC │ 12 Dec 25 20:18 UTC │
	│ node    │ list -p multinode-943484                                                                                                                                  │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:18 UTC │                     │
	│ node    │ multinode-943484 node delete m03                                                                                                                          │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:18 UTC │ 12 Dec 25 20:18 UTC │
	│ stop    │ multinode-943484 stop                                                                                                                                     │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:18 UTC │ 12 Dec 25 20:21 UTC │
	│ start   │ -p multinode-943484 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:21 UTC │ 12 Dec 25 20:22 UTC │
	│ node    │ list -p multinode-943484                                                                                                                                  │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start   │ -p multinode-943484-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-943484-m02 │ jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │                     │
	│ start   │ -p multinode-943484-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-943484-m03 │ jenkins │ v1.37.0 │ 12 Dec 25 20:22 UTC │ 12 Dec 25 20:23 UTC │
	│ node    │ add -p multinode-943484                                                                                                                                   │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:23 UTC │                     │
	│ delete  │ -p multinode-943484-m03                                                                                                                                   │ multinode-943484-m03 │ jenkins │ v1.37.0 │ 12 Dec 25 20:23 UTC │ 12 Dec 25 20:23 UTC │
	│ delete  │ -p multinode-943484                                                                                                                                       │ multinode-943484     │ jenkins │ v1.37.0 │ 12 Dec 25 20:23 UTC │ 12 Dec 25 20:23 UTC │
	│ start   │ -p test-preload-056213 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-056213  │ jenkins │ v1.37.0 │ 12 Dec 25 20:23 UTC │ 12 Dec 25 20:25 UTC │
	│ image   │ test-preload-056213 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-056213  │ jenkins │ v1.37.0 │ 12 Dec 25 20:25 UTC │ 12 Dec 25 20:25 UTC │
	│ stop    │ -p test-preload-056213                                                                                                                                    │ test-preload-056213  │ jenkins │ v1.37.0 │ 12 Dec 25 20:25 UTC │ 12 Dec 25 20:25 UTC │
	│ start   │ -p test-preload-056213 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-056213  │ jenkins │ v1.37.0 │ 12 Dec 25 20:25 UTC │ 12 Dec 25 20:25 UTC │
	│ image   │ test-preload-056213 image list                                                                                                                            │ test-preload-056213  │ jenkins │ v1.37.0 │ 12 Dec 25 20:25 UTC │ 12 Dec 25 20:25 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:25:12
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:25:12.758953  166153 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:25:12.759221  166153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:25:12.759229  166153 out.go:374] Setting ErrFile to fd 2...
	I1212 20:25:12.759234  166153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:25:12.759425  166153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:25:12.759858  166153 out.go:368] Setting JSON to false
	I1212 20:25:12.760711  166153 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7653,"bootTime":1765563460,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:25:12.760765  166153 start.go:143] virtualization: kvm guest
	I1212 20:25:12.762629  166153 out.go:179] * [test-preload-056213] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:25:12.763663  166153 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:25:12.763698  166153 notify.go:221] Checking for updates...
	I1212 20:25:12.765491  166153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:25:12.766524  166153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:25:12.767485  166153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:25:12.768556  166153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:25:12.769906  166153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:25:12.772601  166153 config.go:182] Loaded profile config "test-preload-056213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:25:12.773154  166153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:25:12.807693  166153 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 20:25:12.808720  166153 start.go:309] selected driver: kvm2
	I1212 20:25:12.808743  166153 start.go:927] validating driver "kvm2" against &{Name:test-preload-056213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.34.2 ClusterName:test-preload-056213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:25:12.808992  166153 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:25:12.810383  166153 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:25:12.810416  166153 cni.go:84] Creating CNI manager for ""
	I1212 20:25:12.810493  166153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:25:12.810555  166153 start.go:353] cluster config:
	{Name:test-preload-056213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-056213 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:25:12.810665  166153 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:25:12.812171  166153 out.go:179] * Starting "test-preload-056213" primary control-plane node in "test-preload-056213" cluster
	I1212 20:25:12.813269  166153 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:25:12.813318  166153 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:25:12.813331  166153 cache.go:65] Caching tarball of preloaded images
	I1212 20:25:12.813420  166153 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:25:12.813432  166153 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:25:12.813517  166153 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/config.json ...
	I1212 20:25:12.813734  166153 start.go:360] acquireMachinesLock for test-preload-056213: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:25:12.813791  166153 start.go:364] duration metric: took 37.323µs to acquireMachinesLock for "test-preload-056213"
	I1212 20:25:12.813807  166153 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:25:12.813813  166153 fix.go:54] fixHost starting: 
	I1212 20:25:12.815572  166153 fix.go:112] recreateIfNeeded on test-preload-056213: state=Stopped err=<nil>
	W1212 20:25:12.815597  166153 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:25:12.816996  166153 out.go:252] * Restarting existing kvm2 VM for "test-preload-056213" ...
	I1212 20:25:12.817061  166153 main.go:143] libmachine: starting domain...
	I1212 20:25:12.817084  166153 main.go:143] libmachine: ensuring networks are active...
	I1212 20:25:12.817871  166153 main.go:143] libmachine: Ensuring network default is active
	I1212 20:25:12.818354  166153 main.go:143] libmachine: Ensuring network mk-test-preload-056213 is active
	I1212 20:25:12.818873  166153 main.go:143] libmachine: getting domain XML...
	I1212 20:25:12.819926  166153 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-056213</name>
	  <uuid>dee7524e-8494-490f-b85f-33872852442c</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/test-preload-056213.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:e8:16:5e'/>
	      <source network='mk-test-preload-056213'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:45:e2:61'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
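The domain XML above is what libmachine hands to libvirt before the "waiting for domain to start" step that follows. minikube drives this through the libvirt Go bindings, but the same restart can be reproduced by hand against the qemu:///system URI shown in the config by shelling out to virsh (illustrative sketch only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Illustrative only: start the already-defined domain and ask libvirt for
    // its DHCP leases, mirroring the "waiting for IP" step in the log.
    func run(args ...string) (string, error) {
        out, err := exec.Command("virsh",
            append([]string{"-c", "qemu:///system"}, args...)...).CombinedOutput()
        return string(out), err
    }

    func main() {
        if out, err := run("start", "test-preload-056213"); err != nil {
            fmt.Println("start:", err, out)
            return
        }
        // domifaddr lists the addresses libvirt knows for the domain's interfaces.
        out, err := run("domifaddr", "test-preload-056213", "--source", "lease")
        fmt.Println(out, err)
    }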
	
	I1212 20:25:14.070355  166153 main.go:143] libmachine: waiting for domain to start...
	I1212 20:25:14.071714  166153 main.go:143] libmachine: domain is now running
	I1212 20:25:14.071731  166153 main.go:143] libmachine: waiting for IP...
	I1212 20:25:14.072529  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:14.073074  166153 main.go:143] libmachine: domain test-preload-056213 has current primary IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:14.073086  166153 main.go:143] libmachine: found domain IP: 192.168.39.204
	I1212 20:25:14.073093  166153 main.go:143] libmachine: reserving static IP address...
	I1212 20:25:14.073460  166153 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-056213", mac: "52:54:00:e8:16:5e", ip: "192.168.39.204"} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:23:48 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:14.073484  166153 main.go:143] libmachine: skip adding static IP to network mk-test-preload-056213 - found existing host DHCP lease matching {name: "test-preload-056213", mac: "52:54:00:e8:16:5e", ip: "192.168.39.204"}
	I1212 20:25:14.073515  166153 main.go:143] libmachine: reserved static IP address 192.168.39.204 for domain test-preload-056213
	I1212 20:25:14.073526  166153 main.go:143] libmachine: waiting for SSH...
	I1212 20:25:14.073535  166153 main.go:143] libmachine: Getting to WaitForSSH function...
	I1212 20:25:14.075905  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:14.076255  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:23:48 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:14.076281  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:14.076442  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:14.076704  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:14.076718  166153 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1212 20:25:17.187367  166153 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.204:22: connect: no route to host
	I1212 20:25:23.267341  166153 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.204:22: connect: no route to host
	I1212 20:25:26.376712  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:25:26.380156  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.380630  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.380660  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.380886  166153 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/config.json ...
	I1212 20:25:26.381084  166153 machine.go:94] provisionDockerMachine start ...
	I1212 20:25:26.383048  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.383368  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.383397  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.383545  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:26.383745  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:26.383755  166153 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:25:26.494668  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 20:25:26.494705  166153 buildroot.go:166] provisioning hostname "test-preload-056213"
	I1212 20:25:26.497423  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.497800  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.497827  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.498004  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:26.498255  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:26.498271  166153 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-056213 && echo "test-preload-056213" | sudo tee /etc/hostname
	I1212 20:25:26.617994  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-056213
	
	I1212 20:25:26.621132  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.621606  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.621646  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.621872  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:26.622117  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:26.622141  166153 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-056213' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-056213/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-056213' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:25:26.736955  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:25:26.736988  166153 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22112-135957/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-135957/.minikube}
	I1212 20:25:26.737040  166153 buildroot.go:174] setting up certificates
	I1212 20:25:26.737055  166153 provision.go:84] configureAuth start
	I1212 20:25:26.740048  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.740449  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.740480  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.742774  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.743146  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.743171  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.743312  166153 provision.go:143] copyHostCerts
	I1212 20:25:26.743390  166153 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem, removing ...
	I1212 20:25:26.743410  166153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem
	I1212 20:25:26.743497  166153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem (1123 bytes)
	I1212 20:25:26.743679  166153 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem, removing ...
	I1212 20:25:26.743692  166153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem
	I1212 20:25:26.743740  166153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem (1675 bytes)
	I1212 20:25:26.743828  166153 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem, removing ...
	I1212 20:25:26.743839  166153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem
	I1212 20:25:26.743877  166153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem (1078 bytes)
	I1212 20:25:26.743942  166153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem org=jenkins.test-preload-056213 san=[127.0.0.1 192.168.39.204 localhost minikube test-preload-056213]
	I1212 20:25:26.773082  166153 provision.go:177] copyRemoteCerts
	I1212 20:25:26.773154  166153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:25:26.775400  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.775722  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.775750  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.775915  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:26.860653  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:25:26.892810  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:25:26.924574  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
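The "generating server cert" step above produces a server certificate whose SANs cover the VM IP, localhost, and the machine name, signed by the profile's CA. As a rough illustration of what such a certificate contains, here is a self-signed variant built with Go's crypto/x509 (self-signed for brevity, so not what minikube actually writes; SANs and lifetime copied from the log and profile config):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Illustrative only: self-signed server certificate with the SANs
        // shown in the provision.go log line above.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-056213"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "test-preload-056213"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.204")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }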
	I1212 20:25:26.951971  166153 provision.go:87] duration metric: took 214.884912ms to configureAuth
	I1212 20:25:26.952001  166153 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:25:26.952204  166153 config.go:182] Loaded profile config "test-preload-056213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:25:26.955231  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.955598  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:26.955623  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:26.955824  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:26.956051  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:26.956071  166153 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:25:27.193166  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:25:27.193192  166153 machine.go:97] duration metric: took 812.095085ms to provisionDockerMachine
	I1212 20:25:27.193207  166153 start.go:293] postStartSetup for "test-preload-056213" (driver="kvm2")
	I1212 20:25:27.193220  166153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:25:27.193293  166153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:25:27.195788  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.196232  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:27.196258  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.196379  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:27.282933  166153 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:25:27.287720  166153 info.go:137] Remote host: Buildroot 2025.02
	I1212 20:25:27.287748  166153 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
	I1212 20:25:27.287830  166153 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
	I1212 20:25:27.287935  166153 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem -> 1399952.pem in /etc/ssl/certs
	I1212 20:25:27.288058  166153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:25:27.299248  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:25:27.326871  166153 start.go:296] duration metric: took 133.647146ms for postStartSetup
	I1212 20:25:27.326912  166153 fix.go:56] duration metric: took 14.513098502s for fixHost
	I1212 20:25:27.329453  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.329787  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:27.329812  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.330065  166153 main.go:143] libmachine: Using SSH client type: native
	I1212 20:25:27.330296  166153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I1212 20:25:27.330306  166153 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:25:27.435765  166153 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765571127.400819058
	
	I1212 20:25:27.435791  166153 fix.go:216] guest clock: 1765571127.400819058
	I1212 20:25:27.435798  166153 fix.go:229] Guest: 2025-12-12 20:25:27.400819058 +0000 UTC Remote: 2025-12-12 20:25:27.326915984 +0000 UTC m=+14.616361578 (delta=73.903074ms)
	I1212 20:25:27.435816  166153 fix.go:200] guest clock delta is within tolerance: 73.903074ms
	I1212 20:25:27.435821  166153 start.go:83] releasing machines lock for "test-preload-056213", held for 14.622020795s
	I1212 20:25:27.438559  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.438949  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:27.438973  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.439476  166153 ssh_runner.go:195] Run: cat /version.json
	I1212 20:25:27.439551  166153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:25:27.442481  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.442557  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.442906  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:27.442938  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:27.442947  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.442972  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:27.443159  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:27.443169  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:27.522840  166153 ssh_runner.go:195] Run: systemctl --version
	I1212 20:25:27.549293  166153 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:25:27.695097  166153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:25:27.701922  166153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:25:27.702034  166153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:25:27.721138  166153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 20:25:27.721171  166153 start.go:496] detecting cgroup driver to use...
	I1212 20:25:27.721241  166153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:25:27.740364  166153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:25:27.756867  166153 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:25:27.756962  166153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:25:27.773628  166153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:25:27.789064  166153 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:25:27.929470  166153 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:25:28.138162  166153 docker.go:234] disabling docker service ...
	I1212 20:25:28.138253  166153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:25:28.154134  166153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:25:28.168140  166153 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:25:28.320678  166153 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:25:28.456225  166153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:25:28.471322  166153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:25:28.492904  166153 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:25:28.492967  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.504631  166153 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:25:28.504726  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.516573  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.528169  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.540719  166153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:25:28.552837  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.564530  166153 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:25:28.584037  166153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
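The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A minimal Go sketch of the two central substitutions (illustrative; the real path runs the sed commands over SSH exactly as logged):

    package main

    import (
        "os"
        "regexp"
    )

    // Illustrative only: apply the pause-image and cgroup-manager edits that
    // the sed commands above perform on the cri-o drop-in config.
    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }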
	I1212 20:25:28.595983  166153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:25:28.605828  166153 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 20:25:28.605918  166153 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 20:25:28.625331  166153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:25:28.636937  166153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:25:28.775802  166153 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:25:28.879263  166153 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:25:28.879355  166153 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:25:28.884950  166153 start.go:564] Will wait 60s for crictl version
	I1212 20:25:28.885024  166153 ssh_runner.go:195] Run: which crictl
	I1212 20:25:28.888970  166153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:25:28.920415  166153 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 20:25:28.920516  166153 ssh_runner.go:195] Run: crio --version
	I1212 20:25:28.948334  166153 ssh_runner.go:195] Run: crio --version
	I1212 20:25:28.977137  166153 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 20:25:28.980595  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:28.980938  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:28.980973  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:28.981170  166153 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 20:25:28.985556  166153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:25:28.999922  166153 kubeadm.go:884] updating cluster {Name:test-preload-056213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-056213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:25:29.000051  166153 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:25:29.000103  166153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:25:29.031528  166153 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1212 20:25:29.031596  166153 ssh_runner.go:195] Run: which lz4
	I1212 20:25:29.035952  166153 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 20:25:29.040579  166153 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 20:25:29.040632  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1212 20:25:30.221221  166153 crio.go:462] duration metric: took 1.185304092s to copy over tarball
	I1212 20:25:30.221321  166153 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:25:31.685747  166153 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.464395934s)
	I1212 20:25:31.685774  166153 crio.go:469] duration metric: took 1.464522606s to extract the tarball
	I1212 20:25:31.685782  166153 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 20:25:31.720903  166153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:25:31.760125  166153 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:25:31.760155  166153 cache_images.go:86] Images are preloaded, skipping loading
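The decision logged at crio.go:510 and crio.go:514 above ("couldn't find preloaded image ... assuming images are not preloaded", then "all images are preloaded" after the tarball is extracted) comes from parsing "crictl images --output json". A minimal sketch of that check (illustrative; field names follow crictl's JSON output):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // Illustrative only: ask cri-o (via crictl) whether an image is already
    // present, the same probe the log runs before copying the preload tarball.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.34.2")
        fmt.Println(ok, err)
    }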
	I1212 20:25:31.760165  166153 kubeadm.go:935] updating node { 192.168.39.204 8443 v1.34.2 crio true true} ...
	I1212 20:25:31.760303  166153 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-056213 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-056213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:25:31.760389  166153 ssh_runner.go:195] Run: crio config
	I1212 20:25:31.804739  166153 cni.go:84] Creating CNI manager for ""
	I1212 20:25:31.804768  166153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:25:31.804791  166153 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:25:31.804820  166153 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-056213 NodeName:test-preload-056213 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:25:31.805003  166153 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-056213"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.204"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:25:31.805085  166153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:25:31.816754  166153 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:25:31.816844  166153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:25:31.827935  166153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1212 20:25:31.847573  166153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:25:31.866707  166153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1212 20:25:31.887034  166153 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I1212 20:25:31.891422  166153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
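	The one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: strip any existing line for that name, append the current IP, and copy the result back over /etc/hosts. The same command, broken out for readability (hypothetical standalone version run on the node):
	
	    # drop any stale control-plane.minikube.internal line, then append the current mapping
	    {
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      echo "192.168.39.204	control-plane.minikube.internal"
	    } > /tmp/hosts.new
	    sudo cp /tmp/hosts.new /etc/hosts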
	I1212 20:25:31.905571  166153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:25:32.040672  166153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:25:32.081400  166153 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213 for IP: 192.168.39.204
	I1212 20:25:32.081435  166153 certs.go:195] generating shared ca certs ...
	I1212 20:25:32.081460  166153 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:25:32.081696  166153 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:25:32.081803  166153 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:25:32.081824  166153 certs.go:257] generating profile certs ...
	I1212 20:25:32.081976  166153 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.key
	I1212 20:25:32.082083  166153 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/apiserver.key.c5ca6fa0
	I1212 20:25:32.082171  166153 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/proxy-client.key
	I1212 20:25:32.082318  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:25:32.082364  166153 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:25:32.082379  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:25:32.082422  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:25:32.082459  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:25:32.082505  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:25:32.082564  166153 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:25:32.083522  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:25:32.125017  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:25:32.157637  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:25:32.185921  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:25:32.213690  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 20:25:32.241086  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:25:32.269332  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:25:32.297006  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:25:32.324622  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:25:32.351709  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:25:32.378578  166153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:25:32.405690  166153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:25:32.424794  166153 ssh_runner.go:195] Run: openssl version
	I1212 20:25:32.430888  166153 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:25:32.441518  166153 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:25:32.452354  166153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:25:32.457362  166153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:25:32.457428  166153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:25:32.464332  166153 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:25:32.475057  166153 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1399952.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:25:32.485935  166153 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:25:32.497188  166153 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:25:32.508124  166153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:25:32.512972  166153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:25:32.513021  166153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:25:32.519906  166153 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:25:32.530800  166153 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:25:32.541655  166153 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:25:32.552331  166153 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:25:32.563006  166153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:25:32.568056  166153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:25:32.568126  166153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:25:32.575483  166153 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:25:32.587212  166153 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/139995.pem /etc/ssl/certs/51391683.0
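	The ln/openssl steps above build the standard OpenSSL trust-store layout: each CA is exposed under /etc/ssl/certs both by name and by a <subject-hash>.0 symlink, which is how TLS clients locate it. A minimal sketch for one of the CAs from this run:
	
	    # expose the minikube CA under /etc/ssl/certs and add its subject-hash symlink
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"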
	I1212 20:25:32.598185  166153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:25:32.603345  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:25:32.610509  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:25:32.617509  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:25:32.624644  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:25:32.631600  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:25:32.638587  166153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
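	Each -checkend 86400 call above asks OpenSSL whether the certificate will still be valid 24 hours (86400 seconds) from now; a zero exit status means it will not expire in that window. A standalone loop over the same control-plane certificates, assuming they are readable on the node:
	
	    # flag any control-plane certificate that expires within the next 24h
	    for crt in apiserver-kubelet-client.crt apiserver-etcd-client.crt front-proxy-client.crt \
	               etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt; do
	      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$crt" \
	        || echo "expires within 24h: $crt"
	    done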
	I1212 20:25:32.645636  166153 kubeadm.go:401] StartCluster: {Name:test-preload-056213 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-056213 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:25:32.645730  166153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:25:32.645784  166153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:25:32.678189  166153 cri.go:89] found id: ""
	I1212 20:25:32.678280  166153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:25:32.690301  166153 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:25:32.690327  166153 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:25:32.690398  166153 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:25:32.702189  166153 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:25:32.702716  166153 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-056213" does not appear in /home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:25:32.702838  166153 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-135957/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-056213" cluster setting kubeconfig missing "test-preload-056213" context setting]
	I1212 20:25:32.703143  166153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/kubeconfig: {Name:mkab6c8db323de95c4a5daef1e17fdaffcd571ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:25:32.703721  166153 kapi.go:59] client config for test-preload-056213: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.key", CAFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:25:32.704195  166153 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:25:32.704211  166153 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:25:32.704216  166153 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:25:32.704220  166153 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:25:32.704223  166153 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:25:32.704632  166153 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:25:32.715700  166153 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.204
	I1212 20:25:32.715746  166153 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:25:32.715764  166153 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:25:32.715828  166153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:25:32.749870  166153 cri.go:89] found id: ""
	I1212 20:25:32.749944  166153 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:25:32.773207  166153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:25:32.784780  166153 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:25:32.784799  166153 kubeadm.go:158] found existing configuration files:
	
	I1212 20:25:32.784849  166153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:25:32.795268  166153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:25:32.795323  166153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:25:32.806540  166153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:25:32.817199  166153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:25:32.817284  166153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:25:32.828397  166153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:25:32.838712  166153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:25:32.838793  166153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:25:32.849747  166153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:25:32.859718  166153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:25:32.859805  166153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:25:32.870995  166153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:25:32.882168  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:32.934596  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:34.609657  166153 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.675021777s)
	I1212 20:25:34.609721  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:34.851669  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:34.929434  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:35.006303  166153 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:25:35.006414  166153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:25:35.506553  166153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:25:36.006483  166153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:25:36.506580  166153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:25:36.551193  166153 api_server.go:72] duration metric: took 1.544905529s to wait for apiserver process to appear ...
	I1212 20:25:36.551220  166153 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:25:36.551244  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:36.551857  166153 api_server.go:269] stopped: https://192.168.39.204:8443/healthz: Get "https://192.168.39.204:8443/healthz": dial tcp 192.168.39.204:8443: connect: connection refused
	I1212 20:25:37.051544  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:39.140306  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:25:39.140346  166153 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:25:39.140383  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:39.163600  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 20:25:39.163633  166153 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 20:25:39.552247  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:39.557147  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:25:39.557182  166153 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:25:40.051775  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:40.059934  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 20:25:40.059970  166153 api_server.go:103] status: https://192.168.39.204:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 20:25:40.551613  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:40.558992  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I1212 20:25:40.570070  166153 api_server.go:141] control plane version: v1.34.2
	I1212 20:25:40.570117  166153 api_server.go:131] duration metric: took 4.018880331s to wait for apiserver health ...
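	The 403 and 500 responses above are normal while the restarted apiserver finishes its RBAC and priority-class bootstrap; minikube simply polls /healthz until it returns 200. The probe can be reproduced by hand with curl, using the profile's client certificate (paths as shown in the client config logged earlier) to avoid the anonymous-user 403:
	
	    # query the apiserver health endpoint with the profile's client certificate
	    curl --cacert /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt \
	         --cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.crt \
	         --key /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.key \
	         https://192.168.39.204:8443/healthz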
	I1212 20:25:40.570127  166153 cni.go:84] Creating CNI manager for ""
	I1212 20:25:40.570134  166153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:25:40.571350  166153 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 20:25:40.572305  166153 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 20:25:40.586704  166153 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 20:25:40.610766  166153 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:25:40.617919  166153 system_pods.go:59] 7 kube-system pods found
	I1212 20:25:40.617991  166153 system_pods.go:61] "coredns-66bc5c9577-sd22g" [8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:25:40.618013  166153 system_pods.go:61] "etcd-test-preload-056213" [e03b97ac-1654-4d35-8968-f396cb239552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:25:40.618031  166153 system_pods.go:61] "kube-apiserver-test-preload-056213" [e7eb94db-a8ff-4247-b8e7-e30a2ed45bcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:25:40.618046  166153 system_pods.go:61] "kube-controller-manager-test-preload-056213" [3c785ac5-8314-46eb-8a10-87b4db3778e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:25:40.618057  166153 system_pods.go:61] "kube-proxy-lmwhs" [22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 20:25:40.618072  166153 system_pods.go:61] "kube-scheduler-test-preload-056213" [6fc6f479-1e66-482a-9ddd-15867637c7ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:25:40.618080  166153 system_pods.go:61] "storage-provisioner" [9f388266-4edc-4183-a5c7-50abcb1a9ef2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:25:40.618094  166153 system_pods.go:74] duration metric: took 7.307068ms to wait for pod list to return data ...
	I1212 20:25:40.618105  166153 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:25:40.622379  166153 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 20:25:40.622415  166153 node_conditions.go:123] node cpu capacity is 2
	I1212 20:25:40.622438  166153 node_conditions.go:105] duration metric: took 4.309397ms to run NodePressure ...
	I1212 20:25:40.622516  166153 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:25:40.896388  166153 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1212 20:25:40.900028  166153 kubeadm.go:744] kubelet initialised
	I1212 20:25:40.900063  166153 kubeadm.go:745] duration metric: took 3.635549ms waiting for restarted kubelet to initialise ...
	I1212 20:25:40.900085  166153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:25:40.914601  166153 ops.go:34] apiserver oom_adj: -16
	I1212 20:25:40.914625  166153 kubeadm.go:602] duration metric: took 8.224290896s to restartPrimaryControlPlane
	I1212 20:25:40.914639  166153 kubeadm.go:403] duration metric: took 8.26900877s to StartCluster
	I1212 20:25:40.914665  166153 settings.go:142] acquiring lock: {Name:mk2e3b99c7ed93165698abc6c533d079febb6d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:25:40.914769  166153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:25:40.915384  166153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/kubeconfig: {Name:mkab6c8db323de95c4a5daef1e17fdaffcd571ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:25:40.915622  166153 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:25:40.915688  166153 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:25:40.915796  166153 addons.go:70] Setting storage-provisioner=true in profile "test-preload-056213"
	I1212 20:25:40.915816  166153 addons.go:239] Setting addon storage-provisioner=true in "test-preload-056213"
	I1212 20:25:40.915811  166153 addons.go:70] Setting default-storageclass=true in profile "test-preload-056213"
	W1212 20:25:40.915824  166153 addons.go:248] addon storage-provisioner should already be in state true
	I1212 20:25:40.915835  166153 config.go:182] Loaded profile config "test-preload-056213": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:25:40.915857  166153 host.go:66] Checking if "test-preload-056213" exists ...
	I1212 20:25:40.915842  166153 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-056213"
	I1212 20:25:40.917883  166153 out.go:179] * Verifying Kubernetes components...
	I1212 20:25:40.918106  166153 kapi.go:59] client config for test-preload-056213: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.key", CAFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:25:40.918443  166153 addons.go:239] Setting addon default-storageclass=true in "test-preload-056213"
	W1212 20:25:40.918461  166153 addons.go:248] addon default-storageclass should already be in state true
	I1212 20:25:40.918486  166153 host.go:66] Checking if "test-preload-056213" exists ...
	I1212 20:25:40.918954  166153 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:25:40.918981  166153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:25:40.919881  166153 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:25:40.919896  166153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:25:40.920170  166153 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:25:40.920185  166153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:25:40.922730  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:40.922968  166153 main.go:143] libmachine: domain test-preload-056213 has defined MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:40.923073  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:40.923102  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:40.923262  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:40.923429  166153 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:16:5e", ip: ""} in network mk-test-preload-056213: {Iface:virbr1 ExpiryTime:2025-12-12 21:25:23 +0000 UTC Type:0 Mac:52:54:00:e8:16:5e Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:test-preload-056213 Clientid:01:52:54:00:e8:16:5e}
	I1212 20:25:40.923461  166153 main.go:143] libmachine: domain test-preload-056213 has defined IP address 192.168.39.204 and MAC address 52:54:00:e8:16:5e in network mk-test-preload-056213
	I1212 20:25:40.923595  166153 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/test-preload-056213/id_rsa Username:docker}
	I1212 20:25:41.171216  166153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:25:41.197689  166153 node_ready.go:35] waiting up to 6m0s for node "test-preload-056213" to be "Ready" ...
	I1212 20:25:41.203082  166153 node_ready.go:49] node "test-preload-056213" is "Ready"
	I1212 20:25:41.203137  166153 node_ready.go:38] duration metric: took 5.396826ms for node "test-preload-056213" to be "Ready" ...
	I1212 20:25:41.203159  166153 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:25:41.203225  166153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:25:41.231372  166153 api_server.go:72] duration metric: took 315.7144ms to wait for apiserver process to appear ...
	I1212 20:25:41.231404  166153 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:25:41.231424  166153 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I1212 20:25:41.237563  166153 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I1212 20:25:41.238450  166153 api_server.go:141] control plane version: v1.34.2
	I1212 20:25:41.238474  166153 api_server.go:131] duration metric: took 7.062834ms to wait for apiserver health ...
	I1212 20:25:41.238485  166153 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:25:41.241794  166153 system_pods.go:59] 7 kube-system pods found
	I1212 20:25:41.241828  166153 system_pods.go:61] "coredns-66bc5c9577-sd22g" [8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:25:41.241835  166153 system_pods.go:61] "etcd-test-preload-056213" [e03b97ac-1654-4d35-8968-f396cb239552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:25:41.241846  166153 system_pods.go:61] "kube-apiserver-test-preload-056213" [e7eb94db-a8ff-4247-b8e7-e30a2ed45bcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:25:41.241857  166153 system_pods.go:61] "kube-controller-manager-test-preload-056213" [3c785ac5-8314-46eb-8a10-87b4db3778e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:25:41.241865  166153 system_pods.go:61] "kube-proxy-lmwhs" [22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d] Running
	I1212 20:25:41.241874  166153 system_pods.go:61] "kube-scheduler-test-preload-056213" [6fc6f479-1e66-482a-9ddd-15867637c7ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:25:41.241880  166153 system_pods.go:61] "storage-provisioner" [9f388266-4edc-4183-a5c7-50abcb1a9ef2] Running
	I1212 20:25:41.241892  166153 system_pods.go:74] duration metric: took 3.399782ms to wait for pod list to return data ...
	I1212 20:25:41.241901  166153 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:25:41.246164  166153 default_sa.go:45] found service account: "default"
	I1212 20:25:41.246193  166153 default_sa.go:55] duration metric: took 4.280378ms for default service account to be created ...
	I1212 20:25:41.246206  166153 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:25:41.248885  166153 system_pods.go:86] 7 kube-system pods found
	I1212 20:25:41.248912  166153 system_pods.go:89] "coredns-66bc5c9577-sd22g" [8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:25:41.248923  166153 system_pods.go:89] "etcd-test-preload-056213" [e03b97ac-1654-4d35-8968-f396cb239552] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 20:25:41.248937  166153 system_pods.go:89] "kube-apiserver-test-preload-056213" [e7eb94db-a8ff-4247-b8e7-e30a2ed45bcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 20:25:41.248950  166153 system_pods.go:89] "kube-controller-manager-test-preload-056213" [3c785ac5-8314-46eb-8a10-87b4db3778e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 20:25:41.248956  166153 system_pods.go:89] "kube-proxy-lmwhs" [22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d] Running
	I1212 20:25:41.248966  166153 system_pods.go:89] "kube-scheduler-test-preload-056213" [6fc6f479-1e66-482a-9ddd-15867637c7ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 20:25:41.248972  166153 system_pods.go:89] "storage-provisioner" [9f388266-4edc-4183-a5c7-50abcb1a9ef2] Running
	I1212 20:25:41.248981  166153 system_pods.go:126] duration metric: took 2.765408ms to wait for k8s-apps to be running ...
	I1212 20:25:41.248991  166153 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:25:41.249046  166153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:25:41.290660  166153 system_svc.go:56] duration metric: took 41.656647ms WaitForService to wait for kubelet
	I1212 20:25:41.290690  166153 kubeadm.go:587] duration metric: took 375.041002ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:25:41.290709  166153 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:25:41.294635  166153 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 20:25:41.294654  166153 node_conditions.go:123] node cpu capacity is 2
	I1212 20:25:41.294666  166153 node_conditions.go:105] duration metric: took 3.952122ms to run NodePressure ...
	I1212 20:25:41.294687  166153 start.go:242] waiting for startup goroutines ...
	I1212 20:25:41.299398  166153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:25:41.315093  166153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:25:41.952567  166153 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1212 20:25:41.953668  166153 addons.go:530] duration metric: took 1.037983265s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 20:25:41.953705  166153 start.go:247] waiting for cluster config update ...
	I1212 20:25:41.953719  166153 start.go:256] writing updated cluster config ...
	I1212 20:25:41.953952  166153 ssh_runner.go:195] Run: rm -f paused
	I1212 20:25:41.959222  166153 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:25:41.959727  166153 kapi.go:59] client config for test-preload-056213: &rest.Config{Host:"https://192.168.39.204:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/profiles/test-preload-056213/client.key", CAFile:"/home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:25:41.962492  166153 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sd22g" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 20:25:43.968516  166153 pod_ready.go:104] pod "coredns-66bc5c9577-sd22g" is not "Ready", error: <nil>
	W1212 20:25:46.468237  166153 pod_ready.go:104] pod "coredns-66bc5c9577-sd22g" is not "Ready", error: <nil>
	W1212 20:25:48.967616  166153 pod_ready.go:104] pod "coredns-66bc5c9577-sd22g" is not "Ready", error: <nil>
	W1212 20:25:50.968818  166153 pod_ready.go:104] pod "coredns-66bc5c9577-sd22g" is not "Ready", error: <nil>
	I1212 20:25:51.968782  166153 pod_ready.go:94] pod "coredns-66bc5c9577-sd22g" is "Ready"
	I1212 20:25:51.968815  166153 pod_ready.go:86] duration metric: took 10.006300124s for pod "coredns-66bc5c9577-sd22g" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:51.973096  166153 pod_ready.go:83] waiting for pod "etcd-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:52.978600  166153 pod_ready.go:94] pod "etcd-test-preload-056213" is "Ready"
	I1212 20:25:52.978627  166153 pod_ready.go:86] duration metric: took 1.005495492s for pod "etcd-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:52.981013  166153 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:52.985226  166153 pod_ready.go:94] pod "kube-apiserver-test-preload-056213" is "Ready"
	I1212 20:25:52.985245  166153 pod_ready.go:86] duration metric: took 4.213437ms for pod "kube-apiserver-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:52.987523  166153 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:52.991411  166153 pod_ready.go:94] pod "kube-controller-manager-test-preload-056213" is "Ready"
	I1212 20:25:52.991431  166153 pod_ready.go:86] duration metric: took 3.88863ms for pod "kube-controller-manager-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:53.165572  166153 pod_ready.go:83] waiting for pod "kube-proxy-lmwhs" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:53.567452  166153 pod_ready.go:94] pod "kube-proxy-lmwhs" is "Ready"
	I1212 20:25:53.567486  166153 pod_ready.go:86] duration metric: took 401.88713ms for pod "kube-proxy-lmwhs" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:53.765842  166153 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:55.773073  166153 pod_ready.go:94] pod "kube-scheduler-test-preload-056213" is "Ready"
	I1212 20:25:55.773130  166153 pod_ready.go:86] duration metric: took 2.0072492s for pod "kube-scheduler-test-preload-056213" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:25:55.773149  166153 pod_ready.go:40] duration metric: took 13.813891508s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
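	The pod_ready loop above watches each control-plane pod by label until it reports Ready. Roughly the same check can be run by hand with kubectl wait against the restarted cluster, e.g. for the CoreDNS pods (label taken from the list above; the context name matches the profile):
	
	    # wait up to 4 minutes for the kube-dns (CoreDNS) pods in kube-system to become Ready
	    kubectl --context test-preload-056213 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m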
	I1212 20:25:55.817492  166153 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1212 20:25:55.819212  166153 out.go:179] * Done! kubectl is now configured to use "test-preload-056213" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.564006227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765571156563975740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685767ba-ee9b-4423-8e9a-2c443f3b9301 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.564920775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dda918c2-0596-4efb-8117-4ed73daf0194 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.564994231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dda918c2-0596-4efb-8117-4ed73daf0194 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.565147472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a611955482ad17f5098bd6152348ba936a3dc11f82b917b508332b1e0be905ff,PodSandboxId:57b994299acb16af5c79594504211105d5eadc2a7acdb2837a0f2a3a5fcbf60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765571143721391149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sd22g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a9164220f5a520a355216ef5dab95fde4c9b44699292e465434d452b92020c,PodSandboxId:415b3618814823c30b917969da98d6f920d0bbf75a6ff363ffad85dbbc8b4be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765571140443213591,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmwhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12aa96c05fbce12282c0b33130236e8b610f35201e32b3bfc72f442f842532fd,PodSandboxId:f09e0e257fad38e0559cd29d9e37a13b8f77c14e7f238a9de7f666b27a4a0771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765571140418198972,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f388266-4edc-4183-a5c7-50abcb1a9ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37753e456f0d6688cb4c113bf27f6de774aadd0b60cab637667c393aa5dcba2,PodSandboxId:f8d154016da93605c7f6cbbe68d3efde981886a753fe38b7717655215dd98e65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765571136267949893,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74993c5934749023fd23bd3a64d6ea3b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bad2f1c897c202f89981dfea2b7168ffecfe1896ceebb43422796cc89d4c87,PodSandboxId:ec70aff0d63e49ef835cbc9c6686be5a02fc28b94b3fabe58e3212bfeeedf073,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765571136216698139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8b5075aa45c23fd67a1b5665cb66749,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc9044f7fd7514b89829471dc412e8dd9330801c129de43e99f6b098fceb3bc8,PodSandboxId:73fc01a3f7a1121f2fefb0332e486d083926c0deac630d6d4c7361db5f03978c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765571136248245891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a095a742a932b4a7bef9ab180b42b5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c594fdca3da79bd9d2536c554d71eac2c83a2a05eb3b1bdb84dc261c21ff1d,PodSandboxId:fbdf7a3dcda1d31de9e08aa34c68c814528d92354c185c471d8e244d4621854a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765571136186195568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06dd55fb15c38ae037afb47462cc9553,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dda918c2-0596-4efb-8117-4ed73daf0194 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.595188016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15b66071-b647-4f69-87ea-e6b47c4266e6 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.595253335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15b66071-b647-4f69-87ea-e6b47c4266e6 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.596671177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4bca9d56-c07b-43f9-8f96-c83a4f26dd5b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.597653643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765571156597588261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4bca9d56-c07b-43f9-8f96-c83a4f26dd5b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.598487106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fc463b7-2e40-4571-a895-4210cc1a1ab8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.598550522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fc463b7-2e40-4571-a895-4210cc1a1ab8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.599553682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a611955482ad17f5098bd6152348ba936a3dc11f82b917b508332b1e0be905ff,PodSandboxId:57b994299acb16af5c79594504211105d5eadc2a7acdb2837a0f2a3a5fcbf60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765571143721391149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sd22g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a9164220f5a520a355216ef5dab95fde4c9b44699292e465434d452b92020c,PodSandboxId:415b3618814823c30b917969da98d6f920d0bbf75a6ff363ffad85dbbc8b4be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765571140443213591,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmwhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12aa96c05fbce12282c0b33130236e8b610f35201e32b3bfc72f442f842532fd,PodSandboxId:f09e0e257fad38e0559cd29d9e37a13b8f77c14e7f238a9de7f666b27a4a0771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765571140418198972,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f388266-4edc-4183-a5c7-50abcb1a9ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37753e456f0d6688cb4c113bf27f6de774aadd0b60cab637667c393aa5dcba2,PodSandboxId:f8d154016da93605c7f6cbbe68d3efde981886a753fe38b7717655215dd98e65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765571136267949893,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74993c5934749023fd23bd3a64d6ea3b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bad2f1c897c202f89981dfea2b7168ffecfe1896ceebb43422796cc89d4c87,PodSandboxId:ec70aff0d63e49ef835cbc9c6686be5a02fc28b94b3fabe58e3212bfeeedf073,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765571136216698139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8b5075aa45c23fd67a1b5665cb66749,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc9044f7fd7514b89829471dc412e8dd9330801c129de43e99f6b098fceb3bc8,PodSandboxId:73fc01a3f7a1121f2fefb0332e486d083926c0deac630d6d4c7361db5f03978c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765571136248245891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a095a742a932b4a7bef9ab180b42b5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c594fdca3da79bd9d2536c554d71eac2c83a2a05eb3b1bdb84dc261c21ff1d,PodSandboxId:fbdf7a3dcda1d31de9e08aa34c68c814528d92354c185c471d8e244d4621854a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765571136186195568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06dd55fb15c38ae037afb47462cc9553,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fc463b7-2e40-4571-a895-4210cc1a1ab8 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.633075095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0df4c09-c51f-4fc7-8241-98351610f59f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.633273522Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0df4c09-c51f-4fc7-8241-98351610f59f name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.634666488Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fc9f328-8774-4619-af2c-7a9652ac5d19 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.635290695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765571156635268546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fc9f328-8774-4619-af2c-7a9652ac5d19 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.636084852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0095828b-ce52-42ec-ae84-a1eeb1e7407f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.636147202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0095828b-ce52-42ec-ae84-a1eeb1e7407f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.636296824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a611955482ad17f5098bd6152348ba936a3dc11f82b917b508332b1e0be905ff,PodSandboxId:57b994299acb16af5c79594504211105d5eadc2a7acdb2837a0f2a3a5fcbf60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765571143721391149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sd22g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a9164220f5a520a355216ef5dab95fde4c9b44699292e465434d452b92020c,PodSandboxId:415b3618814823c30b917969da98d6f920d0bbf75a6ff363ffad85dbbc8b4be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765571140443213591,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmwhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12aa96c05fbce12282c0b33130236e8b610f35201e32b3bfc72f442f842532fd,PodSandboxId:f09e0e257fad38e0559cd29d9e37a13b8f77c14e7f238a9de7f666b27a4a0771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765571140418198972,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f388266-4edc-4183-a5c7-50abcb1a9ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37753e456f0d6688cb4c113bf27f6de774aadd0b60cab637667c393aa5dcba2,PodSandboxId:f8d154016da93605c7f6cbbe68d3efde981886a753fe38b7717655215dd98e65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765571136267949893,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74993c5934749023fd23bd3a64d6ea3b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bad2f1c897c202f89981dfea2b7168ffecfe1896ceebb43422796cc89d4c87,PodSandboxId:ec70aff0d63e49ef835cbc9c6686be5a02fc28b94b3fabe58e3212bfeeedf073,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765571136216698139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8b5075aa45c23fd67a1b5665cb66749,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc9044f7fd7514b89829471dc412e8dd9330801c129de43e99f6b098fceb3bc8,PodSandboxId:73fc01a3f7a1121f2fefb0332e486d083926c0deac630d6d4c7361db5f03978c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765571136248245891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a095a742a932b4a7bef9ab180b42b5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c594fdca3da79bd9d2536c554d71eac2c83a2a05eb3b1bdb84dc261c21ff1d,PodSandboxId:fbdf7a3dcda1d31de9e08aa34c68c814528d92354c185c471d8e244d4621854a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765571136186195568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06dd55fb15c38ae037afb47462cc9553,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0095828b-ce52-42ec-ae84-a1eeb1e7407f name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.666774302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88b879e9-65e9-4f8a-84b0-71b98b1e5e64 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.666857084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88b879e9-65e9-4f8a-84b0-71b98b1e5e64 name=/runtime.v1.RuntimeService/Version
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.668169439Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=282277b3-89a9-4583-9952-ea7a6ba0b130 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.668537124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765571156668518017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=282277b3-89a9-4583-9952-ea7a6ba0b130 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.669239960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89c65808-fbf8-4aec-af0f-0acdf0bcaaad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.669461771Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89c65808-fbf8-4aec-af0f-0acdf0bcaaad name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 20:25:56 test-preload-056213 crio[836]: time="2025-12-12 20:25:56.669871099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a611955482ad17f5098bd6152348ba936a3dc11f82b917b508332b1e0be905ff,PodSandboxId:57b994299acb16af5c79594504211105d5eadc2a7acdb2837a0f2a3a5fcbf60b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765571143721391149,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sd22g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3a9164220f5a520a355216ef5dab95fde4c9b44699292e465434d452b92020c,PodSandboxId:415b3618814823c30b917969da98d6f920d0bbf75a6ff363ffad85dbbc8b4be8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765571140443213591,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmwhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12aa96c05fbce12282c0b33130236e8b610f35201e32b3bfc72f442f842532fd,PodSandboxId:f09e0e257fad38e0559cd29d9e37a13b8f77c14e7f238a9de7f666b27a4a0771,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765571140418198972,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f388266-4edc-4183-a5c7-50abcb1a9ef2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e37753e456f0d6688cb4c113bf27f6de774aadd0b60cab637667c393aa5dcba2,PodSandboxId:f8d154016da93605c7f6cbbe68d3efde981886a753fe38b7717655215dd98e65,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765571136267949893,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74993c5934749023fd23bd3a64d6ea3b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98bad2f1c897c202f89981dfea2b7168ffecfe1896ceebb43422796cc89d4c87,PodSandboxId:ec70aff0d63e49ef835cbc9c6686be5a02fc28b94b3fabe58e3212bfeeedf073,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4
b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765571136216698139,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8b5075aa45c23fd67a1b5665cb66749,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc9044f7fd7514b89829471dc412e8dd9330801c129de43e99f6b098fceb3bc8,PodSandboxId:73fc01a3f7a1121f2fefb0332e486d083926c0deac630d6d4c7361db5f03978c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765571136248245891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65a095a742a932b4a7bef9ab180b42b5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c594fdca3da79bd9d2536c554d71eac2c83a2a05eb3b1bdb84dc261c21ff1d,PodSandboxId:fbdf7a3dcda1d31de9e08aa34c68c814528d92354c185c471d8e244d4621854a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765571136186195568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-056213,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06dd55fb15c38ae037afb47462cc9553,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89c65808-fbf8-4aec-af0f-0acdf0bcaaad name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	a611955482ad1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   1                   57b994299acb1       coredns-66bc5c9577-sd22g                      kube-system
	a3a9164220f5a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   415b361881482       kube-proxy-lmwhs                              kube-system
	12aa96c05fbce       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   f09e0e257fad3       storage-provisioner                           kube-system
	e37753e456f0d       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   f8d154016da93       kube-controller-manager-test-preload-056213   kube-system
	cc9044f7fd751       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   73fc01a3f7a11       kube-scheduler-test-preload-056213            kube-system
	98bad2f1c897c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   ec70aff0d63e4       etcd-test-preload-056213                      kube-system
	97c594fdca3da       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   fbdf7a3dcda1d       kube-apiserver-test-preload-056213            kube-system
	
	
	==> coredns [a611955482ad17f5098bd6152348ba936a3dc11f82b917b508332b1e0be905ff] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58742 - 50495 "HINFO IN 3340204770030648117.3801351683110808717. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046158922s
	
	
	==> describe nodes <==
	Name:               test-preload-056213
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-056213
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=test-preload-056213
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_24_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:24:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-056213
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:25:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:25:40 +0000   Fri, 12 Dec 2025 20:24:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:25:40 +0000   Fri, 12 Dec 2025 20:24:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:25:40 +0000   Fri, 12 Dec 2025 20:24:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:25:40 +0000   Fri, 12 Dec 2025 20:25:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    test-preload-056213
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 dee7524e8494490fb85f33872852442c
	  System UUID:                dee7524e-8494-490f-b85f-33872852442c
	  Boot ID:                    2bed8d78-4bd4-4034-b530-bd650afaa71b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-sd22g                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     90s
	  kube-system                 etcd-test-preload-056213                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-056213             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-056213    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-lmwhs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-056213             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 89s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 103s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node test-preload-056213 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node test-preload-056213 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node test-preload-056213 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-056213 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  96s                  kubelet          Node test-preload-056213 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     96s                  kubelet          Node test-preload-056213 status is now: NodeHasSufficientPID
	  Normal   Starting                 96s                  kubelet          Starting kubelet.
	  Normal   NodeReady                95s                  kubelet          Node test-preload-056213 status is now: NodeReady
	  Normal   RegisteredNode           92s                  node-controller  Node test-preload-056213 event: Registered Node test-preload-056213 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21s (x8 over 22s)    kubelet          Node test-preload-056213 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 22s)    kubelet          Node test-preload-056213 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 22s)    kubelet          Node test-preload-056213 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-056213 has been rebooted, boot id: 2bed8d78-4bd4-4034-b530-bd650afaa71b
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-056213 event: Registered Node test-preload-056213 in Controller
	
	
	==> dmesg <==
	[Dec12 20:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001330] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006137] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.017264] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.110669] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.091270] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.507733] kauditd_printk_skb: 168 callbacks suppressed
	[  +8.225586] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [98bad2f1c897c202f89981dfea2b7168ffecfe1896ceebb43422796cc89d4c87] <==
	{"level":"warn","ts":"2025-12-12T20:25:38.090327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.111113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.120567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.132945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.141804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.162069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.176723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.190549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.203014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.215572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.236870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.243715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.255434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.263981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.275770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.289638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.307681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.331462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.340047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.351758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.365936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.371203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.383036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.392972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:25:38.442943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41630","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:25:56 up 0 min,  0 users,  load average: 1.00, 0.27, 0.09
	Linux test-preload-056213 6.6.95 #1 SMP PREEMPT_DYNAMIC Fri Dec 12 05:38:44 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [97c594fdca3da79bd9d2536c554d71eac2c83a2a05eb3b1bdb84dc261c21ff1d] <==
	I1212 20:25:39.184091       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 20:25:39.188168       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 20:25:39.190778       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 20:25:39.197448       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 20:25:39.198817       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 20:25:39.203530       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1212 20:25:39.207861       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 20:25:39.222492       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 20:25:39.222527       1 policy_source.go:240] refreshing policies
	I1212 20:25:39.222531       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 20:25:39.222568       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 20:25:39.222576       1 aggregator.go:171] initial CRD sync complete...
	I1212 20:25:39.222582       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 20:25:39.222587       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 20:25:39.222622       1 cache.go:39] Caches are synced for autoregister controller
	I1212 20:25:39.244997       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 20:25:39.945323       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 20:25:40.096571       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 20:25:40.729876       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 20:25:40.776237       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1212 20:25:40.803968       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 20:25:40.815225       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 20:25:42.537940       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 20:25:42.737051       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 20:25:42.936957       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e37753e456f0d6688cb4c113bf27f6de774aadd0b60cab637667c393aa5dcba2] <==
	I1212 20:25:42.550164       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:25:42.552240       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 20:25:42.556402       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 20:25:42.561715       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 20:25:42.562751       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1212 20:25:42.567067       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1212 20:25:42.567164       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 20:25:42.571375       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 20:25:42.573683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 20:25:42.574733       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 20:25:42.578014       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 20:25:42.582078       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:25:42.582087       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 20:25:42.583228       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 20:25:42.583322       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:25:42.583428       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 20:25:42.583527       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 20:25:42.584692       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 20:25:42.584774       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1212 20:25:42.585874       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:25:42.589111       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:25:42.593267       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 20:25:42.593340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 20:25:42.593386       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-056213"
	I1212 20:25:42.593426       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [a3a9164220f5a520a355216ef5dab95fde4c9b44699292e465434d452b92020c] <==
	I1212 20:25:40.637805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:25:40.738026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:25:40.738072       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.204"]
	E1212 20:25:40.738170       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:25:40.799918       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1212 20:25:40.799988       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 20:25:40.800009       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:25:40.809172       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:25:40.810318       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:25:40.810337       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:25:40.818130       1 config.go:200] "Starting service config controller"
	I1212 20:25:40.818215       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:25:40.818335       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:25:40.818357       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:25:40.818423       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:25:40.818430       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:25:40.819477       1 config.go:309] "Starting node config controller"
	I1212 20:25:40.819507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:25:40.919000       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 20:25:40.919055       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:25:40.919100       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:25:40.919762       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [cc9044f7fd7514b89829471dc412e8dd9330801c129de43e99f6b098fceb3bc8] <==
	I1212 20:25:37.836680       1 serving.go:386] Generated self-signed cert in-memory
	W1212 20:25:39.129789       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1212 20:25:39.131213       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1212 20:25:39.131571       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1212 20:25:39.131659       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1212 20:25:39.169853       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 20:25:39.169933       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:25:39.172285       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:25:39.172306       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:25:39.172766       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:25:39.172816       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:25:39.273462       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.272996    1184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-056213\" already exists" pod="kube-system/kube-scheduler-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.273022    1184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.281467    1184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-056213\" already exists" pod="kube-system/etcd-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.281505    1184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.288905    1184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-056213\" already exists" pod="kube-system/kube-apiserver-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.288928    1184 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.296815    1184 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-056213\" already exists" pod="kube-system/kube-controller-manager-test-preload-056213"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.919472    1184 apiserver.go:52] "Watching apiserver"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.923370    1184 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-sd22g" podUID="8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.937682    1184 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.943019    1184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9f388266-4edc-4183-a5c7-50abcb1a9ef2-tmp\") pod \"storage-provisioner\" (UID: \"9f388266-4edc-4183-a5c7-50abcb1a9ef2\") " pod="kube-system/storage-provisioner"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.943353    1184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d-lib-modules\") pod \"kube-proxy-lmwhs\" (UID: \"22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d\") " pod="kube-system/kube-proxy-lmwhs"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: I1212 20:25:39.943379    1184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d-xtables-lock\") pod \"kube-proxy-lmwhs\" (UID: \"22a24a0b-fa1a-42d1-a7c7-00c4f4deed5d\") " pod="kube-system/kube-proxy-lmwhs"
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.944461    1184 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:25:39 test-preload-056213 kubelet[1184]: E1212 20:25:39.944542    1184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume podName:8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9 nodeName:}" failed. No retries permitted until 2025-12-12 20:25:40.444521036 +0000 UTC m=+5.621356720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume") pod "coredns-66bc5c9577-sd22g" (UID: "8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9") : object "kube-system"/"coredns" not registered
	Dec 12 20:25:40 test-preload-056213 kubelet[1184]: E1212 20:25:40.448030    1184 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:25:40 test-preload-056213 kubelet[1184]: E1212 20:25:40.448109    1184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume podName:8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9 nodeName:}" failed. No retries permitted until 2025-12-12 20:25:41.448095021 +0000 UTC m=+6.624930702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume") pod "coredns-66bc5c9577-sd22g" (UID: "8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9") : object "kube-system"/"coredns" not registered
	Dec 12 20:25:40 test-preload-056213 kubelet[1184]: I1212 20:25:40.933768    1184 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 12 20:25:41 test-preload-056213 kubelet[1184]: E1212 20:25:41.457227    1184 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 20:25:41 test-preload-056213 kubelet[1184]: E1212 20:25:41.457332    1184 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume podName:8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9 nodeName:}" failed. No retries permitted until 2025-12-12 20:25:43.457316371 +0000 UTC m=+8.634152056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9-config-volume") pod "coredns-66bc5c9577-sd22g" (UID: "8d52b1b5-b8dc-4e5b-835f-04d9ea2991f9") : object "kube-system"/"coredns" not registered
	Dec 12 20:25:44 test-preload-056213 kubelet[1184]: E1212 20:25:44.997458    1184 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765571144995680121 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 20:25:44 test-preload-056213 kubelet[1184]: E1212 20:25:44.997477    1184 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765571144995680121 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 20:25:51 test-preload-056213 kubelet[1184]: I1212 20:25:51.749519    1184 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 12 20:25:55 test-preload-056213 kubelet[1184]: E1212 20:25:54.999893    1184 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765571154999581967 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 12 20:25:55 test-preload-056213 kubelet[1184]: E1212 20:25:54.999930    1184 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765571154999581967 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [12aa96c05fbce12282c0b33130236e8b610f35201e32b3bfc72f442f842532fd] <==
	I1212 20:25:40.532863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-056213 -n test-preload-056213
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-056213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-056213" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-056213
--- FAIL: TestPreload (144.90s)
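To re-run the same post-mortem by hand against a live profile, the commands the harness invoked above can be used directly (flags and the profile name are taken verbatim from this run; substitute your own profile if this one has already been cleaned up):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-056213 -n test-preload-056213
	out/minikube-linux-amd64 -p test-preload-056213 logs -n 25
	kubectl --context test-preload-056213 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 delete -p test-preload-056213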

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-455927 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-455927 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.104640981s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-455927] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-455927" primary control-plane node in "pause-455927" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-455927" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:32:05.054453  171452 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:32:05.054811  171452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:05.054825  171452 out.go:374] Setting ErrFile to fd 2...
	I1212 20:32:05.054833  171452 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:05.055168  171452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:32:05.055730  171452 out.go:368] Setting JSON to false
	I1212 20:32:05.056996  171452 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8065,"bootTime":1765563460,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:32:05.057077  171452 start.go:143] virtualization: kvm guest
	I1212 20:32:05.058949  171452 out.go:179] * [pause-455927] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:32:05.060571  171452 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:32:05.060608  171452 notify.go:221] Checking for updates...
	I1212 20:32:05.062439  171452 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:32:05.063558  171452 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:32:05.064654  171452 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:05.065603  171452 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:32:05.067644  171452 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:32:05.069275  171452 config.go:182] Loaded profile config "pause-455927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:05.070021  171452 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:32:05.116587  171452 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 20:32:05.117521  171452 start.go:309] selected driver: kvm2
	I1212 20:32:05.117539  171452 start.go:927] validating driver "kvm2" against &{Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:05.117695  171452 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:32:05.119001  171452 cni.go:84] Creating CNI manager for ""
	I1212 20:32:05.119076  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:05.119146  171452 start.go:353] cluster config:
	{Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:05.119299  171452 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:32:05.120564  171452 out.go:179] * Starting "pause-455927" primary control-plane node in "pause-455927" cluster
	I1212 20:32:05.121396  171452 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:32:05.121425  171452 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 20:32:05.121433  171452 cache.go:65] Caching tarball of preloaded images
	I1212 20:32:05.121508  171452 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:32:05.121519  171452 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:32:05.121624  171452 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/config.json ...
	I1212 20:32:05.121837  171452 start.go:360] acquireMachinesLock for pause-455927: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:32:19.115479  171452 start.go:364] duration metric: took 13.993599104s to acquireMachinesLock for "pause-455927"
	I1212 20:32:19.115551  171452 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:32:19.115560  171452 fix.go:54] fixHost starting: 
	I1212 20:32:19.117865  171452 fix.go:112] recreateIfNeeded on pause-455927: state=Running err=<nil>
	W1212 20:32:19.117906  171452 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:32:19.119424  171452 out.go:252] * Updating the running kvm2 "pause-455927" VM ...
	I1212 20:32:19.119459  171452 machine.go:94] provisionDockerMachine start ...
	I1212 20:32:19.124099  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.124802  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.124836  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.125316  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:19.125611  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:19.125640  171452 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:32:19.236657  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-455927
	
	I1212 20:32:19.236699  171452 buildroot.go:166] provisioning hostname "pause-455927"
	I1212 20:32:19.240397  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.240917  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.240950  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.241200  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:19.241456  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:19.241481  171452 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-455927 && echo "pause-455927" | sudo tee /etc/hostname
	I1212 20:32:19.374789  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-455927
	
	I1212 20:32:19.378268  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.378759  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.378791  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.378986  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:19.379260  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:19.379286  171452 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-455927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-455927/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-455927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:32:19.486341  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:32:19.486379  171452 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22112-135957/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-135957/.minikube}
	I1212 20:32:19.486432  171452 buildroot.go:174] setting up certificates
	I1212 20:32:19.486455  171452 provision.go:84] configureAuth start
	I1212 20:32:19.489759  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.490239  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.490273  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.492917  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.493468  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.493516  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.493735  171452 provision.go:143] copyHostCerts
	I1212 20:32:19.493798  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem, removing ...
	I1212 20:32:19.493813  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem
	I1212 20:32:19.493872  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/key.pem (1675 bytes)
	I1212 20:32:19.493992  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem, removing ...
	I1212 20:32:19.494003  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem
	I1212 20:32:19.494027  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/ca.pem (1078 bytes)
	I1212 20:32:19.494096  171452 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem, removing ...
	I1212 20:32:19.494124  171452 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem
	I1212 20:32:19.494155  171452 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-135957/.minikube/cert.pem (1123 bytes)
	I1212 20:32:19.494231  171452 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem org=jenkins.pause-455927 san=[127.0.0.1 192.168.72.217 localhost minikube pause-455927]
	I1212 20:32:19.564594  171452 provision.go:177] copyRemoteCerts
	I1212 20:32:19.564679  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:32:19.568524  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.569259  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.569303  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.569551  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:19.661659  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:32:19.692310  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 20:32:19.722187  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 20:32:19.763134  171452 provision.go:87] duration metric: took 276.636199ms to configureAuth
	I1212 20:32:19.763176  171452 buildroot.go:189] setting minikube options for container-runtime
	I1212 20:32:19.763436  171452 config.go:182] Loaded profile config "pause-455927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:19.766537  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.766914  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:19.766946  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:19.767121  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:19.767399  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:19.767431  171452 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:32:25.333431  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:32:25.333472  171452 machine.go:97] duration metric: took 6.213989836s to provisionDockerMachine
	I1212 20:32:25.333490  171452 start.go:293] postStartSetup for "pause-455927" (driver="kvm2")
	I1212 20:32:25.333533  171452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:32:25.333645  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:32:25.337074  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337565  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.337592  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337775  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.420189  171452 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:32:25.425973  171452 info.go:137] Remote host: Buildroot 2025.02
	I1212 20:32:25.426006  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
	I1212 20:32:25.426085  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
	I1212 20:32:25.426218  171452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem -> 1399952.pem in /etc/ssl/certs
	I1212 20:32:25.426333  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:32:25.438070  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:25.468983  171452 start.go:296] duration metric: took 135.474539ms for postStartSetup
	I1212 20:32:25.469029  171452 fix.go:56] duration metric: took 6.353471123s for fixHost
	I1212 20:32:25.472824  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473399  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.473438  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473665  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:25.473914  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:25.473927  171452 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:32:25.645611  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765571545.641052606
	
	I1212 20:32:25.645646  171452 fix.go:216] guest clock: 1765571545.641052606
	I1212 20:32:25.645657  171452 fix.go:229] Guest: 2025-12-12 20:32:25.641052606 +0000 UTC Remote: 2025-12-12 20:32:25.469033507 +0000 UTC m=+20.483050535 (delta=172.019099ms)
	I1212 20:32:25.645681  171452 fix.go:200] guest clock delta is within tolerance: 172.019099ms
	I1212 20:32:25.645689  171452 start.go:83] releasing machines lock for "pause-455927", held for 6.530163251s
	I1212 20:32:25.649384  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.649926  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.649968  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.650584  171452 ssh_runner.go:195] Run: cat /version.json
	I1212 20:32:25.650718  171452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:32:25.654367  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654533  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654844  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.654875  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655058  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.655072  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.655143  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655321  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.842718  171452 ssh_runner.go:195] Run: systemctl --version
	I1212 20:32:25.860014  171452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:32:26.064692  171452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:32:26.073724  171452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:32:26.073838  171452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:32:26.085666  171452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:32:26.085697  171452 start.go:496] detecting cgroup driver to use...
	I1212 20:32:26.085769  171452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:32:26.106590  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:32:26.125624  171452 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:32:26.125713  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:32:26.147638  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:32:26.163699  171452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:32:26.393067  171452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:32:26.595189  171452 docker.go:234] disabling docker service ...
	I1212 20:32:26.595269  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:32:26.636704  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:32:26.652874  171452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:32:26.940440  171452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:32:27.311497  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:32:27.336790  171452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:32:27.382693  171452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:32:27.382768  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.410604  171452 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:32:27.410686  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.439715  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.472842  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.523514  171452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:32:27.551881  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.575647  171452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.600517  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.631432  171452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:32:27.676899  171452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:32:27.704677  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:28.018140  171452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:32:28.741066  171452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:32:28.741192  171452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:32:28.747422  171452 start.go:564] Will wait 60s for crictl version
	I1212 20:32:28.747503  171452 ssh_runner.go:195] Run: which crictl
	I1212 20:32:28.752239  171452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:32:28.787287  171452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 20:32:28.787390  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.827330  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.869558  171452 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 20:32:28.873945  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874455  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:28.874485  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874702  171452 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 20:32:28.879956  171452 kubeadm.go:884] updating cluster {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:32:28.880092  171452 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:32:28.880151  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.929250  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.929280  171452 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:32:28.929335  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.963072  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.963101  171452 cache_images.go:86] Images are preloaded, skipping loading
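
The `crictl images --output json` listing above is how minikube decides the preload already contains everything it needs before skipping image loading. A rough sketch of that kind of check follows; the JSON field names (`images[].repoTags`) and the example tag are assumptions based on typical crictl output, not values quoted from minikube.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// imageList mirrors the assumed shape of `crictl images --output json`
	// (field names images[].repoTags are an assumption, not quoted from minikube).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Example tag only; the real check would walk the full image list
		// expected for the requested Kubernetes version.
		want := "registry.k8s.io/kube-apiserver:v1.34.2"
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
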
	I1212 20:32:28.963124  171452 kubeadm.go:935] updating node { 192.168.72.217 8443 v1.34.2 crio true true} ...
	I1212 20:32:28.963253  171452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:32:28.963346  171452 ssh_runner.go:195] Run: crio config
	I1212 20:32:29.015486  171452 cni.go:84] Creating CNI manager for ""
	I1212 20:32:29.015516  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:29.015539  171452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:32:29.015577  171452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455927 NodeName:pause-455927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:32:29.015723  171452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
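Everything from `kubeadm config:` down to here is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of reading it back document by document with gopkg.in/yaml.v3, decoding generically into maps rather than minikube's typed structs:

	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		// The file holds several YAML documents separated by "---";
		// decode them one at a time and print each document's kind.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
		}
	}
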
	I1212 20:32:29.015788  171452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:32:29.029071  171452 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:32:29.029174  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:32:29.041710  171452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 20:32:29.067137  171452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:32:29.090616  171452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1212 20:32:29.111607  171452 ssh_runner.go:195] Run: grep 192.168.72.217	control-plane.minikube.internal$ /etc/hosts
	I1212 20:32:29.116128  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:29.327166  171452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:32:29.349123  171452 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927 for IP: 192.168.72.217
	I1212 20:32:29.349152  171452 certs.go:195] generating shared ca certs ...
	I1212 20:32:29.349175  171452 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:29.349389  171452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:32:29.349471  171452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:32:29.349495  171452 certs.go:257] generating profile certs ...
	I1212 20:32:29.349634  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key
	I1212 20:32:29.349735  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key.96be7686
	I1212 20:32:29.349799  171452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key
	I1212 20:32:29.349956  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:32:29.350014  171452 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:32:29.350025  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:32:29.350071  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:32:29.350120  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:32:29.350155  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:32:29.350216  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:29.351798  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:32:29.391274  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:32:29.433519  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:32:29.471577  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:32:29.505316  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:32:29.539859  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:32:29.582259  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:32:29.619161  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:32:29.657802  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:32:29.700212  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:32:29.813537  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:32:29.880999  171452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:32:29.917590  171452 ssh_runner.go:195] Run: openssl version
	I1212 20:32:29.930348  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.954872  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:32:29.980443  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996827  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996923  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:32:30.015024  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:30.060562  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.087216  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:32:30.101086  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114304  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114390  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.131600  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:32:30.151234  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.170011  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:32:30.188758  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198411  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198487  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.209719  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
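
The repeated pattern above (copy a PEM into /usr/share/ca-certificates, run `openssl x509 -hash -noout`, then test for or create a symlink such as /etc/ssl/certs/b5213941.0) installs each CA under its OpenSSL subject hash so TLS clients on the node can find it; the `.0` suffix disambiguates hash collisions. A compressed Go sketch of the same hash-and-link step, shelling out to the same openssl invocation (illustrative only; minikube runs the individual commands shown in the log):

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCert installs certPath under /etc/ssl/certs/<subject-hash>.0,
	// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log.
	func linkCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
	
		os.Remove(link) // equivalent of ln -f: replace any stale link
		if err := os.Symlink(certPath, link); err != nil {
			return err
		}
		fmt.Println(link, "->", certPath)
		return nil
	}
	
	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}
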
	I1212 20:32:30.232214  171452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:32:30.246171  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:32:30.263093  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:32:30.277000  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:32:30.289474  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:32:30.306371  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:32:30.326209  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
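
The `-checkend 86400` runs above ask whether each control-plane certificate remains valid for at least another 24 hours; a failure here is what would trigger certificate regeneration on this restart. The equivalent check in Go using only the standard library, assuming a PEM-encoded certificate file as in the log:

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// validFor reports whether the first certificate in the PEM file is still
	// valid for at least the given duration (what openssl's -checkend tests).
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}
	
	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("valid for another 24h:", ok)
	}
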
	I1212 20:32:30.341602  171452 kubeadm.go:401] StartCluster: {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:30.341715  171452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:32:30.341793  171452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:32:30.407887  171452 cri.go:89] found id: "043959b7ec0676c5a46d24c6b3780b50a2ceec833a0baaa4e6e73acf1c3f2bf8"
	I1212 20:32:30.407921  171452 cri.go:89] found id: "e23dc1ae63a645bb3abf7ae15578ebd6e7104d9d8742579b8a4cd30f2880abb4"
	I1212 20:32:30.407927  171452 cri.go:89] found id: "aa17ca435607599d6287a54daf67f4b7a8658cd7f3594922d070df6c466efddd"
	I1212 20:32:30.407932  171452 cri.go:89] found id: "81a956b8db9946c63b7866caebf25c7ce9dd581d6260f913e7d4ba350ab8e284"
	I1212 20:32:30.407937  171452 cri.go:89] found id: "5232e76d229f8b27af5a043e2647c18c747064e99032f15a681f847f707b3929"
	I1212 20:32:30.407942  171452 cri.go:89] found id: "a41a2dd4009cc2e8f86406abc78371ba81607f6a718be8d5c6df050398f9e087"
	I1212 20:32:30.407947  171452 cri.go:89] found id: "746ddb3f8d9647e92c04d17355872276418fb9cc02eb1e77265d243ac56e8f7d"
	I1212 20:32:30.407952  171452 cri.go:89] found id: "e7b65b13232f2f7116344810fb35a8fcb4b7fa3b2494fa8a3afb8580b9a20436"
	I1212 20:32:30.407956  171452 cri.go:89] found id: "5b7d3025e166d04be96b4f90a1166e90c80e0e86e9ced791a4e5e1bfe0ae17ca"
	I1212 20:32:30.407979  171452 cri.go:89] found id: ""
	I1212 20:32:30.408029  171452 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-455927 -n pause-455927
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-455927 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-455927 logs -n 25: (1.347790305s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-873824 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                      │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                      │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat docker --no-pager                                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/docker/daemon.json                                                                                                                                                                                           │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo docker system info                                                                                                                                                                                                    │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo containerd config dump                                                                                                                                                                                                │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo crio config                                                                                                                                                                                                           │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ delete  │ -p cilium-873824                                                                                                                                                                                                                            │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │ 12 Dec 25 20:32 UTC │
	│ start   │ -p cert-options-992051 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-992051      │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ delete  │ -p force-systemd-env-370330                                                                                                                                                                                                                 │ force-systemd-env-370330 │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │ 12 Dec 25 20:32 UTC │
	│ start   │ -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-202994   │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:32:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:32:24.011464  173420 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:32:24.011587  173420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:24.011592  173420 out.go:374] Setting ErrFile to fd 2...
	I1212 20:32:24.011597  173420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:24.011801  173420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:32:24.012350  173420 out.go:368] Setting JSON to false
	I1212 20:32:24.013286  173420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8084,"bootTime":1765563460,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:32:24.013347  173420 start.go:143] virtualization: kvm guest
	I1212 20:32:24.015453  173420 out.go:179] * [old-k8s-version-202994] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:32:24.016659  173420 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:32:24.016714  173420 notify.go:221] Checking for updates...
	I1212 20:32:24.018854  173420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:32:24.020040  173420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:32:24.021120  173420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:24.022168  173420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:32:24.023231  173420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:32:24.024733  173420 config.go:182] Loaded profile config "cert-expiration-391329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.024820  173420 config.go:182] Loaded profile config "cert-options-992051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.024887  173420 config.go:182] Loaded profile config "guest-095861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 20:32:24.024995  173420 config.go:182] Loaded profile config "pause-455927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.025085  173420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:32:24.065210  173420 out.go:179] * Using the kvm2 driver based on user configuration
	I1212 20:32:24.066818  173420 start.go:309] selected driver: kvm2
	I1212 20:32:24.066841  173420 start.go:927] validating driver "kvm2" against <nil>
	I1212 20:32:24.066859  173420 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:32:24.068091  173420 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:32:24.068522  173420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:32:24.068566  173420 cni.go:84] Creating CNI manager for ""
	I1212 20:32:24.068638  173420 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:24.068656  173420 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 20:32:24.068720  173420 start.go:353] cluster config:
	{Name:old-k8s-version-202994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-202994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:24.068871  173420 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:32:24.071904  173420 out.go:179] * Starting "old-k8s-version-202994" primary control-plane node in "old-k8s-version-202994" cluster
	I1212 20:32:22.301011  170878 crio.go:462] duration metric: took 1.388158221s to copy over tarball
	I1212 20:32:22.301085  170878 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:32:23.875552  170878 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.574442174s)
	I1212 20:32:23.875572  170878 crio.go:469] duration metric: took 1.574538056s to extract the tarball
	I1212 20:32:23.875580  170878 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 20:32:23.913005  170878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:23.952246  170878 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:23.952259  170878 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:32:23.952266  170878 kubeadm.go:935] updating node { 192.168.61.15 8443 v1.34.2 crio true true} ...
	I1212 20:32:23.952352  170878 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-391329 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-391329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:32:23.952407  170878 ssh_runner.go:195] Run: crio config
	I1212 20:32:24.005240  170878 cni.go:84] Creating CNI manager for ""
	I1212 20:32:24.005263  170878 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:24.005291  170878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:32:24.005318  170878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-391329 NodeName:cert-expiration-391329 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:32:24.005476  170878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-391329"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:32:24.005566  170878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:32:24.018642  170878 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:32:24.018694  170878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:32:24.035446  170878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1212 20:32:24.058701  170878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:32:24.080496  170878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1212 20:32:24.100734  170878 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I1212 20:32:24.104951  170878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
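
The bash pipeline above rewrites /etc/hosts: it drops any existing line ending in a tab plus control-plane.minikube.internal and appends a fresh entry pointing at the node IP. A rough Go equivalent of that rewrite (a sketch only; minikube runs the shell command shown):

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const host = "control-plane.minikube.internal"
		const ip = "192.168.61.15" // node IP from the log
	
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
	
		// Keep every line that does not already map the control-plane name ...
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		// ... then append the authoritative entry and write the file back.
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
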
	I1212 20:32:24.119149  170878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:24.267244  170878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:32:24.303572  170878 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329 for IP: 192.168.61.15
	I1212 20:32:24.303595  170878 certs.go:195] generating shared ca certs ...
	I1212 20:32:24.303663  170878 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.303958  170878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:32:24.304060  170878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:32:24.304073  170878 certs.go:257] generating profile certs ...
	I1212 20:32:24.304201  170878 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key
	I1212 20:32:24.304232  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt with IP's: []
	I1212 20:32:24.379004  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt ...
	I1212 20:32:24.379021  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt: {Name:mkb6dbe0f4a7cba5de84cc75c7603a05b1c33d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.379211  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key ...
	I1212 20:32:24.379221  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key: {Name:mkd14ab4fc5a7afaf67b9989515995075a8d3785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.379297  170878 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736
	I1212 20:32:24.379307  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.15]
	I1212 20:32:24.459749  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 ...
	I1212 20:32:24.459766  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736: {Name:mkc662c886000767a7485ffd553b7f20e3ebe7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.459936  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736 ...
	I1212 20:32:24.459944  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736: {Name:mk00ccb7bab9e54643f4b989bfadaaf1d639376d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.460014  170878 certs.go:382] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt
	I1212 20:32:24.460082  170878 certs.go:386] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736 -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key
	I1212 20:32:24.460153  170878 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key
	I1212 20:32:24.460164  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt with IP's: []
	I1212 20:32:24.523166  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt ...
	I1212 20:32:24.523185  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt: {Name:mke03192104ef709d025614ee3d02aca2026c5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.523364  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key ...
	I1212 20:32:24.523375  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key: {Name:mkc47887b7e10b4313f8a0f7b9cc50282cf9c8ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
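
The certs.go/crypto.go lines above generate the profile certificates for this cluster and sign them with the shared minikube CA; the apiserver cert is issued with the service IP, loopback, and node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.15) as subject alternative names. A condensed sketch of that signing step with crypto/x509, assuming the CA certificate and key are already loaded; the subject name and validity period here are illustrative, not minikube's:

	package certutil
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)
	
	// signServerCert issues a CA-signed certificate whose subject alternative
	// names are the given IPs, roughly the step logged as "generating signed
	// profile cert for minikube" above.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP, as in the log
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		return certPEM, key, nil
	}
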
	I1212 20:32:24.523561  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:32:24.523598  170878 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:32:24.523605  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:32:24.523627  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:32:24.523647  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:32:24.523664  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:32:24.523701  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:24.524292  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:32:24.554848  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:32:24.583821  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:32:24.614181  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:32:24.643464  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 20:32:24.672709  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:32:24.704906  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:32:24.737296  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:32:24.771095  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:32:24.800082  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:32:24.828469  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:32:24.857298  170878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:32:24.877294  170878 ssh_runner.go:195] Run: openssl version
	I1212 20:32:24.883422  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.895031  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:32:24.906679  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.911829  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.911891  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.918904  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:24.930559  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1399952.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:24.942363  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.954334  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:32:24.968136  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.974678  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.974745  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.986323  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:32:25.003035  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:32:25.017134  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.032866  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:32:25.047709  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.052959  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.053027  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.060745  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:32:25.074931  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/139995.pem /etc/ssl/certs/51391683.0
	I1212 20:32:25.089595  170878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:32:25.096043  170878 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:32:25.096104  170878 kubeadm.go:401] StartCluster: {Name:cert-expiration-391329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-391329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:25.096192  170878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:32:25.096248  170878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:32:25.139205  170878 cri.go:89] found id: ""
	I1212 20:32:25.139292  170878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:32:25.151191  170878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:32:25.163260  170878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:32:25.176936  170878 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:32:25.176947  170878 kubeadm.go:158] found existing configuration files:
	
	I1212 20:32:25.176992  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:32:25.188374  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:32:25.188452  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:32:25.202378  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:32:25.215920  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:32:25.215993  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:32:25.228084  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:32:25.238896  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:32:25.238965  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:32:25.254940  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:32:25.266528  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:32:25.266579  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
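
[editor's note] The block above is the "stale config cleanup" check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it is missing or does not match, so kubeadm regenerates it on init. A small Go sketch of that check-then-remove pattern (file list and endpoint copied from the log; illustrative only, minimal error handling):

// stale_kubeconfig.go - sketch of the kubeconfig endpoint check shown above
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it, matching `sudo rm -f`.
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}
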
	I1212 20:32:25.278675  170878 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 20:32:25.328625  170878 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:32:25.328726  170878 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:32:25.423283  170878 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:32:25.423403  170878 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:32:25.423509  170878 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:32:25.436036  170878 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:32:25.645779  173294 start.go:364] duration metric: took 8.067588009s to acquireMachinesLock for "cert-options-992051"
	I1212 20:32:25.645846  173294 start.go:93] Provisioning new machine with config: &{Name:cert-options-992051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:cert-options-992051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:32:25.645957  173294 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 20:32:25.437640  170878 out.go:252]   - Generating certificates and keys ...
	I1212 20:32:25.437729  170878 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:32:25.437819  170878 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:32:25.679635  170878 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:32:26.180223  170878 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:32:26.398835  170878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:32:26.589044  170878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:32:26.943572  170878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:32:26.943797  170878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-391329 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I1212 20:32:27.030537  170878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:32:27.030667  170878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-391329 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I1212 20:32:25.647776  173294 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 20:32:25.648027  173294 start.go:159] libmachine.API.Create for "cert-options-992051" (driver="kvm2")
	I1212 20:32:25.648064  173294 client.go:173] LocalClient.Create starting
	I1212 20:32:25.648151  173294 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem
	I1212 20:32:25.648200  173294 main.go:143] libmachine: Decoding PEM data...
	I1212 20:32:25.648226  173294 main.go:143] libmachine: Parsing certificate...
	I1212 20:32:25.648289  173294 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem
	I1212 20:32:25.648312  173294 main.go:143] libmachine: Decoding PEM data...
	I1212 20:32:25.648325  173294 main.go:143] libmachine: Parsing certificate...
	I1212 20:32:25.648806  173294 main.go:143] libmachine: creating domain...
	I1212 20:32:25.648815  173294 main.go:143] libmachine: creating network...
	I1212 20:32:25.650651  173294 main.go:143] libmachine: found existing default network
	I1212 20:32:25.651083  173294 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.652634  173294 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bb5010}
	I1212 20:32:25.652817  173294 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-992051</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.659769  173294 main.go:143] libmachine: creating private network mk-cert-options-992051 192.168.39.0/24...
	I1212 20:32:25.744922  173294 main.go:143] libmachine: private network mk-cert-options-992051 192.168.39.0/24 created
	I1212 20:32:25.745239  173294 main.go:143] libmachine: <network>
	  <name>mk-cert-options-992051</name>
	  <uuid>03d84525-441d-47d3-b610-9c7cc6186d86</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:f0:25:19'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.745273  173294 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 ...
	I1212 20:32:25.745316  173294 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso
	I1212 20:32:25.745325  173294 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:25.745400  173294 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22112-135957/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso...
	I1212 20:32:26.350296  173294 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/id_rsa...
	I1212 20:32:26.505273  173294 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk...
	I1212 20:32:26.505311  173294 main.go:143] libmachine: Writing magic tar header
	I1212 20:32:26.505337  173294 main.go:143] libmachine: Writing SSH key tar header
	I1212 20:32:26.505412  173294 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 ...
	I1212 20:32:26.505465  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051
	I1212 20:32:26.505498  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 (perms=drwx------)
	I1212 20:32:26.505510  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines
	I1212 20:32:26.505518  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines (perms=drwxr-xr-x)
	I1212 20:32:26.505528  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:26.505535  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube (perms=drwxr-xr-x)
	I1212 20:32:26.505542  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957
	I1212 20:32:26.505549  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957 (perms=drwxrwxr-x)
	I1212 20:32:26.505556  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1212 20:32:26.505563  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 20:32:26.505569  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1212 20:32:26.505575  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 20:32:26.505582  173294 main.go:143] libmachine: checking permissions on dir: /home
	I1212 20:32:26.505588  173294 main.go:143] libmachine: skipping /home - not owner
	I1212 20:32:26.505590  173294 main.go:143] libmachine: defining domain...
	I1212 20:32:26.506965  173294 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-992051</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-992051'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
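
[editor's note] The libmachine steps above talk to libvirt directly: define the private network mk-cert-options-992051, define the domain from the XML just shown, ensure both networks are active, then start the domain. For readers reproducing this by hand, the equivalent sequence can be driven with virsh; the sketch below assumes the XML above has been saved to net.xml and dom.xml and is only an illustration of the ordering, not minikube's implementation:

// virsh_sequence.go - sketch of the define/start sequence using virsh
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func virsh(args ...string) error {
	cmd := exec.Command("virsh", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"net-define", "net.xml"},               // the mk-cert-options-992051 network XML
		{"net-start", "mk-cert-options-992051"}, // make sure the network is active
		{"define", "dom.xml"},                   // the domain XML shown above
		{"start", "cert-options-992051"},        // boot the VM
	}
	for _, s := range steps {
		if err := virsh(s...); err != nil {
			fmt.Fprintf(os.Stderr, "virsh %v failed: %v\n", s, err)
			os.Exit(1)
		}
	}
}
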
	
	I1212 20:32:26.721760  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:7e:26:50 in network default
	I1212 20:32:26.722728  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:26.722748  173294 main.go:143] libmachine: starting domain...
	I1212 20:32:26.722756  173294 main.go:143] libmachine: ensuring networks are active...
	I1212 20:32:26.723781  173294 main.go:143] libmachine: Ensuring network default is active
	I1212 20:32:26.724342  173294 main.go:143] libmachine: Ensuring network mk-cert-options-992051 is active
	I1212 20:32:26.725063  173294 main.go:143] libmachine: getting domain XML...
	I1212 20:32:26.726612  173294 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-992051</name>
	  <uuid>0c41405e-4448-49e0-b37e-191e0f8127f1</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6d:0d:f7'/>
	      <source network='mk-cert-options-992051'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7e:26:50'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 20:32:25.333431  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:32:25.333472  171452 machine.go:97] duration metric: took 6.213989836s to provisionDockerMachine
	I1212 20:32:25.333490  171452 start.go:293] postStartSetup for "pause-455927" (driver="kvm2")
	I1212 20:32:25.333533  171452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:32:25.333645  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:32:25.337074  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337565  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.337592  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337775  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.420189  171452 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:32:25.425973  171452 info.go:137] Remote host: Buildroot 2025.02
	I1212 20:32:25.426006  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
	I1212 20:32:25.426085  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
	I1212 20:32:25.426218  171452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem -> 1399952.pem in /etc/ssl/certs
	I1212 20:32:25.426333  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:32:25.438070  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:25.468983  171452 start.go:296] duration metric: took 135.474539ms for postStartSetup
	I1212 20:32:25.469029  171452 fix.go:56] duration metric: took 6.353471123s for fixHost
	I1212 20:32:25.472824  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473399  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.473438  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473665  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:25.473914  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:25.473927  171452 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:32:25.645611  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765571545.641052606
	
	I1212 20:32:25.645646  171452 fix.go:216] guest clock: 1765571545.641052606
	I1212 20:32:25.645657  171452 fix.go:229] Guest: 2025-12-12 20:32:25.641052606 +0000 UTC Remote: 2025-12-12 20:32:25.469033507 +0000 UTC m=+20.483050535 (delta=172.019099ms)
	I1212 20:32:25.645681  171452 fix.go:200] guest clock delta is within tolerance: 172.019099ms
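
[editor's note] The guest-clock check above runs `date +%s.%N` over SSH, parses the epoch value, and compares it against the host clock; here the ~172ms delta is within tolerance, so no resync is needed. A minimal local sketch of the comparison (the real code reads the time over SSH; the one-second tolerance below is an assumption for illustration):

// clock_delta.go - sketch of the guest/host clock delta check shown above
package main

import (
	"fmt"
	"math"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Stand-in for the SSH'd `date +%s.%N`.
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	host := float64(time.Now().UnixNano()) / 1e9

	delta := time.Duration(math.Abs(host-guest) * float64(time.Second))
	if delta < time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v too large, would resync\n", delta)
	}
}
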
	I1212 20:32:25.645689  171452 start.go:83] releasing machines lock for "pause-455927", held for 6.530163251s
	I1212 20:32:25.649384  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.649926  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.649968  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.650584  171452 ssh_runner.go:195] Run: cat /version.json
	I1212 20:32:25.650718  171452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:32:25.654367  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654533  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654844  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.654875  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655058  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.655072  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.655143  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655321  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.842718  171452 ssh_runner.go:195] Run: systemctl --version
	I1212 20:32:25.860014  171452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:32:26.064692  171452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:32:26.073724  171452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:32:26.073838  171452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:32:26.085666  171452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:32:26.085697  171452 start.go:496] detecting cgroup driver to use...
	I1212 20:32:26.085769  171452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:32:26.106590  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:32:26.125624  171452 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:32:26.125713  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:32:26.147638  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:32:26.163699  171452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:32:26.393067  171452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:32:26.595189  171452 docker.go:234] disabling docker service ...
	I1212 20:32:26.595269  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:32:26.636704  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:32:26.652874  171452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:32:26.940440  171452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:32:27.311497  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:32:27.336790  171452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:32:27.382693  171452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:32:27.382768  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.410604  171452 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:32:27.410686  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.439715  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.472842  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.523514  171452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:32:27.551881  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.575647  171452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.600517  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.631432  171452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:32:27.676899  171452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:32:27.704677  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:28.018140  171452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:32:28.741066  171452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:32:28.741192  171452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:32:28.747422  171452 start.go:564] Will wait 60s for crictl version
	I1212 20:32:28.747503  171452 ssh_runner.go:195] Run: which crictl
	I1212 20:32:28.752239  171452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:32:28.787287  171452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 20:32:28.787390  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.827330  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.869558  171452 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 20:32:24.072954  173420 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:32:24.073023  173420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:32:24.073039  173420 cache.go:65] Caching tarball of preloaded images
	I1212 20:32:24.073225  173420 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:32:24.073244  173420 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1212 20:32:24.073381  173420 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/config.json ...
	I1212 20:32:24.073411  173420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/config.json: {Name:mk180fd32bab09b100dbb701dda3e1bed2efb6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.073614  173420 start.go:360] acquireMachinesLock for old-k8s-version-202994: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:32:27.229596  170878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:32:27.488260  170878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:32:27.587976  170878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:32:27.588048  170878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:32:27.829600  170878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:32:28.206891  170878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:32:28.657655  170878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:32:29.092440  170878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:32:29.155490  170878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:32:29.156067  170878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:32:29.158659  170878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:32:28.873945  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874455  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:28.874485  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874702  171452 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 20:32:28.879956  171452 kubeadm.go:884] updating cluster {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:32:28.880092  171452 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:32:28.880151  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.929250  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.929280  171452 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:32:28.929335  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.963072  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.963101  171452 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:32:28.963124  171452 kubeadm.go:935] updating node { 192.168.72.217 8443 v1.34.2 crio true true} ...
	I1212 20:32:28.963253  171452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:32:28.963346  171452 ssh_runner.go:195] Run: crio config
	I1212 20:32:29.015486  171452 cni.go:84] Creating CNI manager for ""
	I1212 20:32:29.015516  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:29.015539  171452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:32:29.015577  171452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455927 NodeName:pause-455927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:32:29.015723  171452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
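
[editor's note] The generated kubeadm.yaml above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. If you want to exercise such a config without touching a node, kubeadm's dry-run mode is one option; the sketch below simply shells out to it (path taken from the log, illustrative only):

// kubeadm_dryrun.go - sketch: exercise the generated config without applying it
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--dry-run")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
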
	
	I1212 20:32:29.015788  171452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:32:29.029071  171452 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:32:29.029174  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:32:29.041710  171452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 20:32:29.067137  171452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:32:29.090616  171452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1212 20:32:29.111607  171452 ssh_runner.go:195] Run: grep 192.168.72.217	control-plane.minikube.internal$ /etc/hosts
	I1212 20:32:29.116128  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:29.327166  171452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:32:29.349123  171452 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927 for IP: 192.168.72.217
	I1212 20:32:29.349152  171452 certs.go:195] generating shared ca certs ...
	I1212 20:32:29.349175  171452 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:29.349389  171452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:32:29.349471  171452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:32:29.349495  171452 certs.go:257] generating profile certs ...
	I1212 20:32:29.349634  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key
	I1212 20:32:29.349735  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key.96be7686
	I1212 20:32:29.349799  171452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key
	I1212 20:32:29.349956  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:32:29.350014  171452 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:32:29.350025  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:32:29.350071  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:32:29.350120  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:32:29.350155  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:32:29.350216  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:29.351798  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:32:29.391274  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:32:29.433519  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:32:29.471577  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:32:29.505316  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:32:29.539859  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:32:29.582259  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:32:29.619161  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:32:29.657802  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:32:29.700212  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:32:29.813537  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:32:29.880999  171452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:32:29.917590  171452 ssh_runner.go:195] Run: openssl version
	I1212 20:32:29.930348  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.954872  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:32:29.980443  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996827  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996923  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:32:30.015024  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:29.160384  170878 out.go:252]   - Booting up control plane ...
	I1212 20:32:29.160509  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:32:29.160624  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:32:29.160901  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:32:29.183677  170878 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:32:29.183826  170878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:32:29.196080  170878 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:32:29.197197  170878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:32:29.197364  170878 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:32:29.426922  170878 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:32:29.427124  170878 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:32:30.926545  170878 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501341833s
	I1212 20:32:30.929373  170878 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:32:30.929533  170878 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.15:8443/livez
	I1212 20:32:30.929680  170878 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:32:30.929780  170878 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:32:28.419477  173294 main.go:143] libmachine: waiting for domain to start...
	I1212 20:32:28.421210  173294 main.go:143] libmachine: domain is now running
	I1212 20:32:28.421220  173294 main.go:143] libmachine: waiting for IP...
	I1212 20:32:28.422342  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.423049  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.423071  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.423464  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.423517  173294 retry.go:31] will retry after 223.336908ms: waiting for domain to come up
	I1212 20:32:28.649385  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.650210  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.650221  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.650703  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.650747  173294 retry.go:31] will retry after 240.829505ms: waiting for domain to come up
	I1212 20:32:28.893447  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.894315  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.894328  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.894713  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.894750  173294 retry.go:31] will retry after 433.755448ms: waiting for domain to come up
	I1212 20:32:29.330492  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:29.331307  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:29.331318  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:29.331782  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:29.331813  173294 retry.go:31] will retry after 390.882429ms: waiting for domain to come up
	I1212 20:32:29.724496  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:29.725098  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:29.725105  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:29.725542  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:29.725573  173294 retry.go:31] will retry after 607.076952ms: waiting for domain to come up
	I1212 20:32:30.334262  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:30.334910  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:30.334919  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:30.335285  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:30.335333  173294 retry.go:31] will retry after 805.814989ms: waiting for domain to come up
	I1212 20:32:31.142477  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:31.143304  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:31.143317  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:31.143704  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:31.143743  173294 retry.go:31] will retry after 1.127893412s: waiting for domain to come up
	I1212 20:32:32.273051  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:32.273788  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:32.273798  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:32.274207  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:32.274243  173294 retry.go:31] will retry after 1.029710389s: waiting for domain to come up
	I1212 20:32:30.060562  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.087216  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:32:30.101086  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114304  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114390  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.131600  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:32:30.151234  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.170011  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:32:30.188758  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198411  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198487  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.209719  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:32:30.232214  171452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:32:30.246171  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:32:30.263093  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:32:30.277000  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:32:30.289474  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:32:30.306371  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:32:30.326209  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:32:30.341602  171452 kubeadm.go:401] StartCluster: {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:30.341715  171452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:32:30.341793  171452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:32:30.407887  171452 cri.go:89] found id: "043959b7ec0676c5a46d24c6b3780b50a2ceec833a0baaa4e6e73acf1c3f2bf8"
	I1212 20:32:30.407921  171452 cri.go:89] found id: "e23dc1ae63a645bb3abf7ae15578ebd6e7104d9d8742579b8a4cd30f2880abb4"
	I1212 20:32:30.407927  171452 cri.go:89] found id: "aa17ca435607599d6287a54daf67f4b7a8658cd7f3594922d070df6c466efddd"
	I1212 20:32:30.407932  171452 cri.go:89] found id: "81a956b8db9946c63b7866caebf25c7ce9dd581d6260f913e7d4ba350ab8e284"
	I1212 20:32:30.407937  171452 cri.go:89] found id: "5232e76d229f8b27af5a043e2647c18c747064e99032f15a681f847f707b3929"
	I1212 20:32:30.407942  171452 cri.go:89] found id: "a41a2dd4009cc2e8f86406abc78371ba81607f6a718be8d5c6df050398f9e087"
	I1212 20:32:30.407947  171452 cri.go:89] found id: "746ddb3f8d9647e92c04d17355872276418fb9cc02eb1e77265d243ac56e8f7d"
	I1212 20:32:30.407952  171452 cri.go:89] found id: "e7b65b13232f2f7116344810fb35a8fcb4b7fa3b2494fa8a3afb8580b9a20436"
	I1212 20:32:30.407956  171452 cri.go:89] found id: "5b7d3025e166d04be96b4f90a1166e90c80e0e86e9ced791a4e5e1bfe0ae17ca"
	I1212 20:32:30.407979  171452 cri.go:89] found id: ""
	I1212 20:32:30.408029  171452 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-455927 -n pause-455927
helpers_test.go:270: (dbg) Run:  kubectl --context pause-455927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-455927 -n pause-455927
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-455927 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-455927 logs -n 25: (1.698481691s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-873824 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                      │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                      │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat docker --no-pager                                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/docker/daemon.json                                                                                                                                                                                           │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo docker system info                                                                                                                                                                                                    │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo containerd config dump                                                                                                                                                                                                │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ ssh     │ -p cilium-873824 sudo crio config                                                                                                                                                                                                           │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ delete  │ -p cilium-873824                                                                                                                                                                                                                            │ cilium-873824            │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │ 12 Dec 25 20:32 UTC │
	│ start   │ -p cert-options-992051 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-992051      │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	│ delete  │ -p force-systemd-env-370330                                                                                                                                                                                                                 │ force-systemd-env-370330 │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │ 12 Dec 25 20:32 UTC │
	│ start   │ -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-202994   │ jenkins │ v1.37.0 │ 12 Dec 25 20:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:32:24
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:32:24.011464  173420 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:32:24.011587  173420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:24.011592  173420 out.go:374] Setting ErrFile to fd 2...
	I1212 20:32:24.011597  173420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:24.011801  173420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:32:24.012350  173420 out.go:368] Setting JSON to false
	I1212 20:32:24.013286  173420 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8084,"bootTime":1765563460,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:32:24.013347  173420 start.go:143] virtualization: kvm guest
	I1212 20:32:24.015453  173420 out.go:179] * [old-k8s-version-202994] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:32:24.016659  173420 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:32:24.016714  173420 notify.go:221] Checking for updates...
	I1212 20:32:24.018854  173420 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:32:24.020040  173420 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:32:24.021120  173420 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:24.022168  173420 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:32:24.023231  173420 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:32:24.024733  173420 config.go:182] Loaded profile config "cert-expiration-391329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.024820  173420 config.go:182] Loaded profile config "cert-options-992051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.024887  173420 config.go:182] Loaded profile config "guest-095861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 20:32:24.024995  173420 config.go:182] Loaded profile config "pause-455927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:24.025085  173420 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:32:24.065210  173420 out.go:179] * Using the kvm2 driver based on user configuration
	I1212 20:32:24.066818  173420 start.go:309] selected driver: kvm2
	I1212 20:32:24.066841  173420 start.go:927] validating driver "kvm2" against <nil>
	I1212 20:32:24.066859  173420 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:32:24.068091  173420 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:32:24.068522  173420 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:32:24.068566  173420 cni.go:84] Creating CNI manager for ""
	I1212 20:32:24.068638  173420 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:24.068656  173420 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 20:32:24.068720  173420 start.go:353] cluster config:
	{Name:old-k8s-version-202994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-202994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:24.068871  173420 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:32:24.071904  173420 out.go:179] * Starting "old-k8s-version-202994" primary control-plane node in "old-k8s-version-202994" cluster
	I1212 20:32:22.301011  170878 crio.go:462] duration metric: took 1.388158221s to copy over tarball
	I1212 20:32:22.301085  170878 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 20:32:23.875552  170878 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.574442174s)
	I1212 20:32:23.875572  170878 crio.go:469] duration metric: took 1.574538056s to extract the tarball
	I1212 20:32:23.875580  170878 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 20:32:23.913005  170878 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:23.952246  170878 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:23.952259  170878 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:32:23.952266  170878 kubeadm.go:935] updating node { 192.168.61.15 8443 v1.34.2 crio true true} ...
	I1212 20:32:23.952352  170878 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-391329 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:cert-expiration-391329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:32:23.952407  170878 ssh_runner.go:195] Run: crio config
	I1212 20:32:24.005240  170878 cni.go:84] Creating CNI manager for ""
	I1212 20:32:24.005263  170878 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:24.005291  170878 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:32:24.005318  170878 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.15 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-391329 NodeName:cert-expiration-391329 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:32:24.005476  170878 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-391329"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.15"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.15"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:32:24.005566  170878 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:32:24.018642  170878 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:32:24.018694  170878 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:32:24.035446  170878 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1212 20:32:24.058701  170878 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:32:24.080496  170878 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1212 20:32:24.100734  170878 ssh_runner.go:195] Run: grep 192.168.61.15	control-plane.minikube.internal$ /etc/hosts
	I1212 20:32:24.104951  170878 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:32:24.119149  170878 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:24.267244  170878 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:32:24.303572  170878 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329 for IP: 192.168.61.15
	I1212 20:32:24.303595  170878 certs.go:195] generating shared ca certs ...
	I1212 20:32:24.303663  170878 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.303958  170878 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:32:24.304060  170878 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:32:24.304073  170878 certs.go:257] generating profile certs ...
	I1212 20:32:24.304201  170878 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key
	I1212 20:32:24.304232  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt with IP's: []
	I1212 20:32:24.379004  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt ...
	I1212 20:32:24.379021  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.crt: {Name:mkb6dbe0f4a7cba5de84cc75c7603a05b1c33d65 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.379211  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key ...
	I1212 20:32:24.379221  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/client.key: {Name:mkd14ab4fc5a7afaf67b9989515995075a8d3785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.379297  170878 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736
	I1212 20:32:24.379307  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.15]
	I1212 20:32:24.459749  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 ...
	I1212 20:32:24.459766  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736: {Name:mkc662c886000767a7485ffd553b7f20e3ebe7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.459936  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736 ...
	I1212 20:32:24.459944  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736: {Name:mk00ccb7bab9e54643f4b989bfadaaf1d639376d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.460014  170878 certs.go:382] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt.36193736 -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt
	I1212 20:32:24.460082  170878 certs.go:386] copying /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key.36193736 -> /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key
	I1212 20:32:24.460153  170878 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key
	I1212 20:32:24.460164  170878 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt with IP's: []
	I1212 20:32:24.523166  170878 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt ...
	I1212 20:32:24.523185  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt: {Name:mke03192104ef709d025614ee3d02aca2026c5cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.523364  170878 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key ...
	I1212 20:32:24.523375  170878 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key: {Name:mkc47887b7e10b4313f8a0f7b9cc50282cf9c8ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.523561  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:32:24.523598  170878 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:32:24.523605  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:32:24.523627  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:32:24.523647  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:32:24.523664  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:32:24.523701  170878 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:24.524292  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:32:24.554848  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:32:24.583821  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:32:24.614181  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:32:24.643464  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 20:32:24.672709  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:32:24.704906  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:32:24.737296  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/cert-expiration-391329/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:32:24.771095  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:32:24.800082  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:32:24.828469  170878 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:32:24.857298  170878 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:32:24.877294  170878 ssh_runner.go:195] Run: openssl version
	I1212 20:32:24.883422  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.895031  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:32:24.906679  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.911829  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.911891  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:32:24.918904  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:24.930559  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1399952.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:24.942363  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.954334  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:32:24.968136  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.974678  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.974745  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:24.986323  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:32:25.003035  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:32:25.017134  170878 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.032866  170878 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:32:25.047709  170878 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.052959  170878 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.053027  170878 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:32:25.060745  170878 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:32:25.074931  170878 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/139995.pem /etc/ssl/certs/51391683.0
	I1212 20:32:25.089595  170878 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:32:25.096043  170878 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:32:25.096104  170878 kubeadm.go:401] StartCluster: {Name:cert-expiration-391329 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:cert-expiration-391329 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.15 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:25.096192  170878 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:32:25.096248  170878 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:32:25.139205  170878 cri.go:89] found id: ""
	I1212 20:32:25.139292  170878 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:32:25.151191  170878 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:32:25.163260  170878 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:32:25.176936  170878 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:32:25.176947  170878 kubeadm.go:158] found existing configuration files:
	
	I1212 20:32:25.176992  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:32:25.188374  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:32:25.188452  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:32:25.202378  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:32:25.215920  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:32:25.215993  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:32:25.228084  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:32:25.238896  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:32:25.238965  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:32:25.254940  170878 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:32:25.266528  170878 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:32:25.266579  170878 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:32:25.278675  170878 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 20:32:25.328625  170878 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:32:25.328726  170878 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:32:25.423283  170878 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:32:25.423403  170878 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:32:25.423509  170878 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:32:25.436036  170878 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:32:25.645779  173294 start.go:364] duration metric: took 8.067588009s to acquireMachinesLock for "cert-options-992051"
	I1212 20:32:25.645846  173294 start.go:93] Provisioning new machine with config: &{Name:cert-options-992051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.34.2 ClusterName:cert-options-992051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:32:25.645957  173294 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 20:32:25.437640  170878 out.go:252]   - Generating certificates and keys ...
	I1212 20:32:25.437729  170878 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:32:25.437819  170878 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:32:25.679635  170878 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:32:26.180223  170878 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:32:26.398835  170878 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:32:26.589044  170878 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:32:26.943572  170878 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:32:26.943797  170878 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-391329 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I1212 20:32:27.030537  170878 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:32:27.030667  170878 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-391329 localhost] and IPs [192.168.61.15 127.0.0.1 ::1]
	I1212 20:32:25.647776  173294 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 20:32:25.648027  173294 start.go:159] libmachine.API.Create for "cert-options-992051" (driver="kvm2")
	I1212 20:32:25.648064  173294 client.go:173] LocalClient.Create starting
	I1212 20:32:25.648151  173294 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem
	I1212 20:32:25.648200  173294 main.go:143] libmachine: Decoding PEM data...
	I1212 20:32:25.648226  173294 main.go:143] libmachine: Parsing certificate...
	I1212 20:32:25.648289  173294 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem
	I1212 20:32:25.648312  173294 main.go:143] libmachine: Decoding PEM data...
	I1212 20:32:25.648325  173294 main.go:143] libmachine: Parsing certificate...
	I1212 20:32:25.648806  173294 main.go:143] libmachine: creating domain...
	I1212 20:32:25.648815  173294 main.go:143] libmachine: creating network...
	I1212 20:32:25.650651  173294 main.go:143] libmachine: found existing default network
	I1212 20:32:25.651083  173294 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.652634  173294 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bb5010}
	I1212 20:32:25.652817  173294 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-992051</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.659769  173294 main.go:143] libmachine: creating private network mk-cert-options-992051 192.168.39.0/24...
	I1212 20:32:25.744922  173294 main.go:143] libmachine: private network mk-cert-options-992051 192.168.39.0/24 created
	I1212 20:32:25.745239  173294 main.go:143] libmachine: <network>
	  <name>mk-cert-options-992051</name>
	  <uuid>03d84525-441d-47d3-b610-9c7cc6186d86</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:f0:25:19'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1212 20:32:25.745273  173294 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 ...
	I1212 20:32:25.745316  173294 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso
	I1212 20:32:25.745325  173294 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:25.745400  173294 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22112-135957/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso...
	I1212 20:32:26.350296  173294 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/id_rsa...
	I1212 20:32:26.505273  173294 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk...
	I1212 20:32:26.505311  173294 main.go:143] libmachine: Writing magic tar header
	I1212 20:32:26.505337  173294 main.go:143] libmachine: Writing SSH key tar header
	I1212 20:32:26.505412  173294 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 ...
	I1212 20:32:26.505465  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051
	I1212 20:32:26.505498  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051 (perms=drwx------)
	I1212 20:32:26.505510  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube/machines
	I1212 20:32:26.505518  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube/machines (perms=drwxr-xr-x)
	I1212 20:32:26.505528  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:26.505535  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957/.minikube (perms=drwxr-xr-x)
	I1212 20:32:26.505542  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22112-135957
	I1212 20:32:26.505549  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22112-135957 (perms=drwxrwxr-x)
	I1212 20:32:26.505556  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1212 20:32:26.505563  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 20:32:26.505569  173294 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1212 20:32:26.505575  173294 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 20:32:26.505582  173294 main.go:143] libmachine: checking permissions on dir: /home
	I1212 20:32:26.505588  173294 main.go:143] libmachine: skipping /home - not owner
	I1212 20:32:26.505590  173294 main.go:143] libmachine: defining domain...
	I1212 20:32:26.506965  173294 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-992051</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-992051'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1212 20:32:26.721760  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:7e:26:50 in network default
	I1212 20:32:26.722728  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:26.722748  173294 main.go:143] libmachine: starting domain...
	I1212 20:32:26.722756  173294 main.go:143] libmachine: ensuring networks are active...
	I1212 20:32:26.723781  173294 main.go:143] libmachine: Ensuring network default is active
	I1212 20:32:26.724342  173294 main.go:143] libmachine: Ensuring network mk-cert-options-992051 is active
	I1212 20:32:26.725063  173294 main.go:143] libmachine: getting domain XML...
	I1212 20:32:26.726612  173294 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-992051</name>
	  <uuid>0c41405e-4448-49e0-b37e-191e0f8127f1</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22112-135957/.minikube/machines/cert-options-992051/cert-options-992051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6d:0d:f7'/>
	      <source network='mk-cert-options-992051'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7e:26:50'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1212 20:32:25.333431  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:32:25.333472  171452 machine.go:97] duration metric: took 6.213989836s to provisionDockerMachine
	I1212 20:32:25.333490  171452 start.go:293] postStartSetup for "pause-455927" (driver="kvm2")
	I1212 20:32:25.333533  171452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:32:25.333645  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:32:25.337074  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337565  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.337592  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.337775  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.420189  171452 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:32:25.425973  171452 info.go:137] Remote host: Buildroot 2025.02
	I1212 20:32:25.426006  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/addons for local assets ...
	I1212 20:32:25.426085  171452 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-135957/.minikube/files for local assets ...
	I1212 20:32:25.426218  171452 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem -> 1399952.pem in /etc/ssl/certs
	I1212 20:32:25.426333  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:32:25.438070  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:25.468983  171452 start.go:296] duration metric: took 135.474539ms for postStartSetup
	I1212 20:32:25.469029  171452 fix.go:56] duration metric: took 6.353471123s for fixHost
	I1212 20:32:25.472824  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473399  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.473438  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.473665  171452 main.go:143] libmachine: Using SSH client type: native
	I1212 20:32:25.473914  171452 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 20:32:25.473927  171452 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1212 20:32:25.645611  171452 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765571545.641052606
	
	I1212 20:32:25.645646  171452 fix.go:216] guest clock: 1765571545.641052606
	I1212 20:32:25.645657  171452 fix.go:229] Guest: 2025-12-12 20:32:25.641052606 +0000 UTC Remote: 2025-12-12 20:32:25.469033507 +0000 UTC m=+20.483050535 (delta=172.019099ms)
	I1212 20:32:25.645681  171452 fix.go:200] guest clock delta is within tolerance: 172.019099ms
	I1212 20:32:25.645689  171452 start.go:83] releasing machines lock for "pause-455927", held for 6.530163251s
	I1212 20:32:25.649384  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.649926  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.649968  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.650584  171452 ssh_runner.go:195] Run: cat /version.json
	I1212 20:32:25.650718  171452 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:32:25.654367  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654533  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.654844  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.654875  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655058  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.655072  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:25.655143  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:25.655321  171452 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/pause-455927/id_rsa Username:docker}
	I1212 20:32:25.842718  171452 ssh_runner.go:195] Run: systemctl --version
	I1212 20:32:25.860014  171452 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:32:26.064692  171452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:32:26.073724  171452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:32:26.073838  171452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:32:26.085666  171452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:32:26.085697  171452 start.go:496] detecting cgroup driver to use...
	I1212 20:32:26.085769  171452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:32:26.106590  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:32:26.125624  171452 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:32:26.125713  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:32:26.147638  171452 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:32:26.163699  171452 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:32:26.393067  171452 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:32:26.595189  171452 docker.go:234] disabling docker service ...
	I1212 20:32:26.595269  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:32:26.636704  171452 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:32:26.652874  171452 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:32:26.940440  171452 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:32:27.311497  171452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:32:27.336790  171452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:32:27.382693  171452 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:32:27.382768  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.410604  171452 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:32:27.410686  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.439715  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.472842  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.523514  171452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:32:27.551881  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.575647  171452 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.600517  171452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:32:27.631432  171452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:32:27.676899  171452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:32:27.704677  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:28.018140  171452 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:32:28.741066  171452 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:32:28.741192  171452 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:32:28.747422  171452 start.go:564] Will wait 60s for crictl version
	I1212 20:32:28.747503  171452 ssh_runner.go:195] Run: which crictl
	I1212 20:32:28.752239  171452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 20:32:28.787287  171452 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 20:32:28.787390  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.827330  171452 ssh_runner.go:195] Run: crio --version
	I1212 20:32:28.869558  171452 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1212 20:32:24.072954  173420 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:32:24.073023  173420 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1212 20:32:24.073039  173420 cache.go:65] Caching tarball of preloaded images
	I1212 20:32:24.073225  173420 preload.go:238] Found /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 20:32:24.073244  173420 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1212 20:32:24.073381  173420 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/config.json ...
	I1212 20:32:24.073411  173420 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/config.json: {Name:mk180fd32bab09b100dbb701dda3e1bed2efb6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:24.073614  173420 start.go:360] acquireMachinesLock for old-k8s-version-202994: {Name:mk1985c179f459a7b1b82780fe7717dfacfba5d1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 20:32:27.229596  170878 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:32:27.488260  170878 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:32:27.587976  170878 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:32:27.588048  170878 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:32:27.829600  170878 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:32:28.206891  170878 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:32:28.657655  170878 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:32:29.092440  170878 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:32:29.155490  170878 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:32:29.156067  170878 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:32:29.158659  170878 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:32:28.873945  171452 main.go:143] libmachine: domain pause-455927 has defined MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874455  171452 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f6:ca:7d", ip: ""} in network mk-pause-455927: {Iface:virbr4 ExpiryTime:2025-12-12 21:31:01 +0000 UTC Type:0 Mac:52:54:00:f6:ca:7d Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:pause-455927 Clientid:01:52:54:00:f6:ca:7d}
	I1212 20:32:28.874485  171452 main.go:143] libmachine: domain pause-455927 has defined IP address 192.168.72.217 and MAC address 52:54:00:f6:ca:7d in network mk-pause-455927
	I1212 20:32:28.874702  171452 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 20:32:28.879956  171452 kubeadm.go:884] updating cluster {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:32:28.880092  171452 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:32:28.880151  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.929250  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.929280  171452 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:32:28.929335  171452 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:32:28.963072  171452 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:32:28.963101  171452 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:32:28.963124  171452 kubeadm.go:935] updating node { 192.168.72.217 8443 v1.34.2 crio true true} ...
	I1212 20:32:28.963253  171452 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-455927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:32:28.963346  171452 ssh_runner.go:195] Run: crio config
	I1212 20:32:29.015486  171452 cni.go:84] Creating CNI manager for ""
	I1212 20:32:29.015516  171452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 20:32:29.015539  171452 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:32:29.015577  171452 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.217 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-455927 NodeName:pause-455927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:32:29.015723  171452 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-455927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:32:29.015788  171452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:32:29.029071  171452 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:32:29.029174  171452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:32:29.041710  171452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 20:32:29.067137  171452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:32:29.090616  171452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1212 20:32:29.111607  171452 ssh_runner.go:195] Run: grep 192.168.72.217	control-plane.minikube.internal$ /etc/hosts
	I1212 20:32:29.116128  171452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:32:29.327166  171452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:32:29.349123  171452 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927 for IP: 192.168.72.217
	I1212 20:32:29.349152  171452 certs.go:195] generating shared ca certs ...
	I1212 20:32:29.349175  171452 certs.go:227] acquiring lock for ca certs: {Name:mk856e15c7830c27b8e705838c72180e3414c0f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:32:29.349389  171452 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key
	I1212 20:32:29.349471  171452 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key
	I1212 20:32:29.349495  171452 certs.go:257] generating profile certs ...
	I1212 20:32:29.349634  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key
	I1212 20:32:29.349735  171452 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key.96be7686
	I1212 20:32:29.349799  171452 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key
	I1212 20:32:29.349956  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem (1338 bytes)
	W1212 20:32:29.350014  171452 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995_empty.pem, impossibly tiny 0 bytes
	I1212 20:32:29.350025  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:32:29.350071  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/ca.pem (1078 bytes)
	I1212 20:32:29.350120  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:32:29.350155  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/certs/key.pem (1675 bytes)
	I1212 20:32:29.350216  171452 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem (1708 bytes)
	I1212 20:32:29.351798  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:32:29.391274  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 20:32:29.433519  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:32:29.471577  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1212 20:32:29.505316  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:32:29.539859  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:32:29.582259  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:32:29.619161  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 20:32:29.657802  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/ssl/certs/1399952.pem --> /usr/share/ca-certificates/1399952.pem (1708 bytes)
	I1212 20:32:29.700212  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:32:29.813537  171452 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-135957/.minikube/certs/139995.pem --> /usr/share/ca-certificates/139995.pem (1338 bytes)
	I1212 20:32:29.880999  171452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:32:29.917590  171452 ssh_runner.go:195] Run: openssl version
	I1212 20:32:29.930348  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.954872  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1399952.pem /etc/ssl/certs/1399952.pem
	I1212 20:32:29.980443  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996827  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 19:43 /usr/share/ca-certificates/1399952.pem
	I1212 20:32:29.996923  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1399952.pem
	I1212 20:32:30.015024  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:32:29.160384  170878 out.go:252]   - Booting up control plane ...
	I1212 20:32:29.160509  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:32:29.160624  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:32:29.160901  170878 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:32:29.183677  170878 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:32:29.183826  170878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:32:29.196080  170878 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:32:29.197197  170878 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:32:29.197364  170878 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:32:29.426922  170878 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:32:29.427124  170878 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:32:30.926545  170878 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501341833s
	I1212 20:32:30.929373  170878 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:32:30.929533  170878 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.15:8443/livez
	I1212 20:32:30.929680  170878 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:32:30.929780  170878 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:32:28.419477  173294 main.go:143] libmachine: waiting for domain to start...
	I1212 20:32:28.421210  173294 main.go:143] libmachine: domain is now running
	I1212 20:32:28.421220  173294 main.go:143] libmachine: waiting for IP...
	I1212 20:32:28.422342  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.423049  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.423071  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.423464  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.423517  173294 retry.go:31] will retry after 223.336908ms: waiting for domain to come up
	I1212 20:32:28.649385  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.650210  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.650221  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.650703  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.650747  173294 retry.go:31] will retry after 240.829505ms: waiting for domain to come up
	I1212 20:32:28.893447  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:28.894315  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:28.894328  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:28.894713  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:28.894750  173294 retry.go:31] will retry after 433.755448ms: waiting for domain to come up
	I1212 20:32:29.330492  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:29.331307  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:29.331318  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:29.331782  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:29.331813  173294 retry.go:31] will retry after 390.882429ms: waiting for domain to come up
	I1212 20:32:29.724496  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:29.725098  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:29.725105  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:29.725542  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:29.725573  173294 retry.go:31] will retry after 607.076952ms: waiting for domain to come up
	I1212 20:32:30.334262  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:30.334910  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:30.334919  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:30.335285  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:30.335333  173294 retry.go:31] will retry after 805.814989ms: waiting for domain to come up
	I1212 20:32:31.142477  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:31.143304  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:31.143317  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:31.143704  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:31.143743  173294 retry.go:31] will retry after 1.127893412s: waiting for domain to come up
	I1212 20:32:32.273051  173294 main.go:143] libmachine: domain cert-options-992051 has defined MAC address 52:54:00:6d:0d:f7 in network mk-cert-options-992051
	I1212 20:32:32.273788  173294 main.go:143] libmachine: no network interface addresses found for domain cert-options-992051 (source=lease)
	I1212 20:32:32.273798  173294 main.go:143] libmachine: trying to list again with source=arp
	I1212 20:32:32.274207  173294 main.go:143] libmachine: unable to find current IP address of domain cert-options-992051 in network mk-cert-options-992051 (interfaces detected: [])
	I1212 20:32:32.274243  173294 retry.go:31] will retry after 1.029710389s: waiting for domain to come up
	I1212 20:32:30.060562  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.087216  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:32:30.101086  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114304  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 19:30 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.114390  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:32:30.131600  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:32:30.151234  171452 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.170011  171452 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/139995.pem /etc/ssl/certs/139995.pem
	I1212 20:32:30.188758  171452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198411  171452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 19:43 /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.198487  171452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139995.pem
	I1212 20:32:30.209719  171452 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:32:30.232214  171452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:32:30.246171  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:32:30.263093  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:32:30.277000  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:32:30.289474  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:32:30.306371  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:32:30.326209  171452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:32:30.341602  171452 kubeadm.go:401] StartCluster: {Name:pause-455927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-455927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:32:30.341715  171452 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:32:30.341793  171452 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:32:30.407887  171452 cri.go:89] found id: "043959b7ec0676c5a46d24c6b3780b50a2ceec833a0baaa4e6e73acf1c3f2bf8"
	I1212 20:32:30.407921  171452 cri.go:89] found id: "e23dc1ae63a645bb3abf7ae15578ebd6e7104d9d8742579b8a4cd30f2880abb4"
	I1212 20:32:30.407927  171452 cri.go:89] found id: "aa17ca435607599d6287a54daf67f4b7a8658cd7f3594922d070df6c466efddd"
	I1212 20:32:30.407932  171452 cri.go:89] found id: "81a956b8db9946c63b7866caebf25c7ce9dd581d6260f913e7d4ba350ab8e284"
	I1212 20:32:30.407937  171452 cri.go:89] found id: "5232e76d229f8b27af5a043e2647c18c747064e99032f15a681f847f707b3929"
	I1212 20:32:30.407942  171452 cri.go:89] found id: "a41a2dd4009cc2e8f86406abc78371ba81607f6a718be8d5c6df050398f9e087"
	I1212 20:32:30.407947  171452 cri.go:89] found id: "746ddb3f8d9647e92c04d17355872276418fb9cc02eb1e77265d243ac56e8f7d"
	I1212 20:32:30.407952  171452 cri.go:89] found id: "e7b65b13232f2f7116344810fb35a8fcb4b7fa3b2494fa8a3afb8580b9a20436"
	I1212 20:32:30.407956  171452 cri.go:89] found id: "5b7d3025e166d04be96b4f90a1166e90c80e0e86e9ced791a4e5e1bfe0ae17ca"
	I1212 20:32:30.407979  171452 cri.go:89] found id: ""
	I1212 20:32:30.408029  171452 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-455927 -n pause-455927
helpers_test.go:270: (dbg) Run:  kubectl --context pause-455927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.43s)


Test pass (376/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 26.26
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 11.52
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 12.53
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.66
31 TestOffline 119.44
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 132.11
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/serial/GCPAuth/FakeCredentials 12.51
44 TestAddons/parallel/Registry 18.1
45 TestAddons/parallel/RegistryCreds 0.76
47 TestAddons/parallel/InspektorGadget 10.91
48 TestAddons/parallel/MetricsServer 6.62
50 TestAddons/parallel/CSI 46.71
51 TestAddons/parallel/Headlamp 21.04
52 TestAddons/parallel/CloudSpanner 6.62
53 TestAddons/parallel/LocalPath 20.06
54 TestAddons/parallel/NvidiaDevicePlugin 6.59
55 TestAddons/parallel/Yakd 11.23
57 TestAddons/StoppedEnableDisable 89.41
58 TestCertOptions 50.66
59 TestCertExpiration 352.4
61 TestForceSystemdFlag 60.23
62 TestForceSystemdEnv 55.35
67 TestErrorSpam/setup 40.04
68 TestErrorSpam/start 0.35
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.45
71 TestErrorSpam/unpause 1.76
72 TestErrorSpam/stop 87.31
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.86
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 52.31
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
84 TestFunctional/serial/CacheCmd/cache/add_local 2.24
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 41.94
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.23
95 TestFunctional/serial/LogsFileCmd 1.24
96 TestFunctional/serial/InvalidService 4.48
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 26.12
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.73
106 TestFunctional/parallel/ServiceCmdConnect 29.51
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 56.24
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.28
112 TestFunctional/parallel/MySQL 40.08
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 1.23
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
122 TestFunctional/parallel/License 0.52
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
125 TestFunctional/parallel/ProfileCmd/profile_list 0.45
126 TestFunctional/parallel/MountCmd/any-port 10.38
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.39
130 TestFunctional/parallel/ServiceCmd/List 0.41
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
133 TestFunctional/parallel/ServiceCmd/Format 0.26
134 TestFunctional/parallel/ServiceCmd/URL 0.24
135 TestFunctional/parallel/MountCmd/specific-port 1.38
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
149 TestFunctional/parallel/ImageCommands/ImageBuild 14.91
150 TestFunctional/parallel/ImageCommands/Setup 1.97
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.19
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.6
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.85
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.79
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.2
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 77.81
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 36.04
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.25
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.2
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.48
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 33.66
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.25
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.22
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 16.54
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.24
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.83
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 21.13
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.18
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 36.47
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.35
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.21
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 33.7
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.17
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.08
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.35
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.39
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.2
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.54
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.33
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.28
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.21
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.09
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.93
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.37
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.36
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.32
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.86
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.64
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 2.14
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 5.7
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.27
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.35
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.3
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.3
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 5.54
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 9.8
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.34
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.22
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 204.91
262 TestMultiControlPlane/serial/DeployApp 7.14
263 TestMultiControlPlane/serial/PingHostFromPods 1.29
264 TestMultiControlPlane/serial/AddWorkerNode 45.11
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
267 TestMultiControlPlane/serial/CopyFile 10.8
268 TestMultiControlPlane/serial/StopSecondaryNode 76.79
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
270 TestMultiControlPlane/serial/RestartSecondaryNode 34.87
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.76
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.08
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
275 TestMultiControlPlane/serial/StopCluster 259.03
276 TestMultiControlPlane/serial/RestartCluster 90.78
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
278 TestMultiControlPlane/serial/AddSecondaryNode 70.19
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
284 TestJSONOutput/start/Command 76.47
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.65
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.6
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.8
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 81.44
316 TestMountStart/serial/StartWithMountFirst 21.87
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 19.12
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.26
323 TestMountStart/serial/RestartStopped 18.8
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 96.21
328 TestMultiNode/serial/DeployApp2Nodes 5.87
329 TestMultiNode/serial/PingHostFrom2Pods 0.94
330 TestMultiNode/serial/AddNode 44.35
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 5.98
334 TestMultiNode/serial/StopNode 2.16
335 TestMultiNode/serial/StartAfterStop 36.4
336 TestMultiNode/serial/RestartKeepsNodes 280.9
337 TestMultiNode/serial/DeleteNode 2.54
338 TestMultiNode/serial/StopMultiNode 168.76
339 TestMultiNode/serial/RestartMultiNode 82.28
340 TestMultiNode/serial/ValidateNameConflict 38.82
347 TestScheduledStopUnix 106.23
351 TestRunningBinaryUpgrade 118.15
353 TestKubernetesUpgrade 147.32
355 TestISOImage/Setup 35.98
357 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
365 TestNoKubernetes/serial/StartWithK8s 91.08
367 TestISOImage/Binaries/crictl 0.18
368 TestISOImage/Binaries/curl 0.18
369 TestISOImage/Binaries/docker 0.18
370 TestISOImage/Binaries/git 0.18
371 TestISOImage/Binaries/iptables 0.18
372 TestISOImage/Binaries/podman 0.17
373 TestISOImage/Binaries/rsync 0.17
374 TestISOImage/Binaries/socat 0.17
375 TestISOImage/Binaries/wget 0.18
376 TestISOImage/Binaries/VBoxControl 0.17
377 TestISOImage/Binaries/VBoxService 0.18
378 TestStoppedBinaryUpgrade/Setup 3.71
379 TestStoppedBinaryUpgrade/Upgrade 157.07
380 TestNoKubernetes/serial/StartWithStopK8s 36.52
381 TestNoKubernetes/serial/Start 42.06
383 TestPause/serial/Start 113.1
384 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
385 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
386 TestNoKubernetes/serial/ProfileList 6.15
387 TestNoKubernetes/serial/Stop 1.23
388 TestNoKubernetes/serial/StartNoArgs 45.76
389 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
390 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
399 TestNetworkPlugins/group/false 4.18
404 TestStartStop/group/old-k8s-version/serial/FirstStart 115.98
406 TestStartStop/group/no-preload/serial/FirstStart 105.28
408 TestStartStop/group/embed-certs/serial/FirstStart 105.65
409 TestStartStop/group/old-k8s-version/serial/DeployApp 11.34
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.03
411 TestStartStop/group/old-k8s-version/serial/Stop 73.09
412 TestStartStop/group/no-preload/serial/DeployApp 11.28
413 TestStartStop/group/embed-certs/serial/DeployApp 11.28
414 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
415 TestStartStop/group/no-preload/serial/Stop 82.99
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
417 TestStartStop/group/embed-certs/serial/Stop 90.24
418 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
419 TestStartStop/group/old-k8s-version/serial/SecondStart 46.08
420 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
421 TestStartStop/group/no-preload/serial/SecondStart 55.37
422 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.01
423 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
424 TestStartStop/group/embed-certs/serial/SecondStart 47.81
425 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
426 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
427 TestStartStop/group/old-k8s-version/serial/Pause 2.98
429 TestStartStop/group/newest-cni/serial/FirstStart 48.7
430 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
431 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
432 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
433 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
434 TestStartStop/group/no-preload/serial/Pause 3.04
435 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
437 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.66
438 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
439 TestStartStop/group/embed-certs/serial/Pause 2.76
440 TestNetworkPlugins/group/auto/Start 93.26
441 TestNetworkPlugins/group/kindnet/Start 88.55
442 TestStartStop/group/newest-cni/serial/DeployApp 0
443 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
444 TestStartStop/group/newest-cni/serial/Stop 6.95
445 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
446 TestStartStop/group/newest-cni/serial/SecondStart 79.24
447 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.4
448 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
449 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
450 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
452 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
453 TestStartStop/group/newest-cni/serial/Pause 2.72
454 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.49
455 TestNetworkPlugins/group/auto/KubeletFlags 0.18
456 TestNetworkPlugins/group/auto/NetCatPod 11.25
457 TestNetworkPlugins/group/calico/Start 87.9
458 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
459 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
460 TestNetworkPlugins/group/auto/DNS 0.16
461 TestNetworkPlugins/group/auto/Localhost 0.12
462 TestNetworkPlugins/group/auto/HairPin 0.13
463 TestNetworkPlugins/group/kindnet/DNS 0.16
464 TestNetworkPlugins/group/kindnet/Localhost 0.15
465 TestNetworkPlugins/group/kindnet/HairPin 0.14
466 TestNetworkPlugins/group/custom-flannel/Start 71.74
467 TestNetworkPlugins/group/enable-default-cni/Start 99.61
468 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
469 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.29
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/calico/KubeletFlags 0.18
472 TestNetworkPlugins/group/calico/NetCatPod 11.25
473 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
474 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
475 TestNetworkPlugins/group/custom-flannel/DNS 0.18
476 TestNetworkPlugins/group/calico/DNS 0.3
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
479 TestNetworkPlugins/group/calico/Localhost 0.41
480 TestNetworkPlugins/group/calico/HairPin 0.27
481 TestNetworkPlugins/group/flannel/Start 73.61
482 TestNetworkPlugins/group/bridge/Start 105.14
483 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
484 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
485 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9
486 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
487 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
488 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
489 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
490 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
491 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.18
493 TestISOImage/PersistentMounts//data 0.21
494 TestISOImage/PersistentMounts//var/lib/docker 0.18
495 TestISOImage/PersistentMounts//var/lib/cni 0.19
496 TestISOImage/PersistentMounts//var/lib/kubelet 0.2
497 TestISOImage/PersistentMounts//var/lib/minikube 0.22
498 TestISOImage/PersistentMounts//var/lib/toolbox 0.2
499 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
500 TestISOImage/VersionJSON 0.4
501 TestISOImage/eBPFSupport 0.33
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
504 TestNetworkPlugins/group/flannel/NetCatPod 10.24
505 TestNetworkPlugins/group/flannel/DNS 0.15
506 TestNetworkPlugins/group/flannel/Localhost 0.11
507 TestNetworkPlugins/group/flannel/HairPin 0.12
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
509 TestNetworkPlugins/group/bridge/NetCatPod 9.21
510 TestNetworkPlugins/group/bridge/DNS 0.13
511 TestNetworkPlugins/group/bridge/Localhost 0.13
512 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (26.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-442866 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-442866 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.259257726s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (26.26s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 19:29:33.656329  139995 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1212 19:29:33.656556  139995 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-442866
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-442866: exit status 85 (75.323733ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-442866 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-442866 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:07.449351  140007 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:29:07.449646  140007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:07.449657  140007 out.go:374] Setting ErrFile to fd 2...
	I1212 19:29:07.449662  140007 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:07.449820  140007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	W1212 19:29:07.449931  140007 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22112-135957/.minikube/config/config.json: open /home/jenkins/minikube-integration/22112-135957/.minikube/config/config.json: no such file or directory
	I1212 19:29:07.450466  140007 out.go:368] Setting JSON to true
	I1212 19:29:07.451954  140007 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4287,"bootTime":1765563460,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:29:07.452012  140007 start.go:143] virtualization: kvm guest
	I1212 19:29:07.455228  140007 out.go:99] [download-only-442866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1212 19:29:07.455340  140007 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 19:29:07.455375  140007 notify.go:221] Checking for updates...
	I1212 19:29:07.456451  140007 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:07.457623  140007 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:29:07.458649  140007 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:29:07.459762  140007 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:29:07.460744  140007 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:29:07.462616  140007 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:07.462916  140007 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:07.973295  140007 out.go:99] Using the kvm2 driver based on user configuration
	I1212 19:29:07.973340  140007 start.go:309] selected driver: kvm2
	I1212 19:29:07.973349  140007 start.go:927] validating driver "kvm2" against <nil>
	I1212 19:29:07.973753  140007 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:07.974342  140007 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1212 19:29:07.974513  140007 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:07.974540  140007 cni.go:84] Creating CNI manager for ""
	I1212 19:29:07.974595  140007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:29:07.974603  140007 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:07.974661  140007 start.go:353] cluster config:
	{Name:download-only-442866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-442866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:07.974842  140007 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:29:07.976229  140007 out.go:99] Downloading VM boot image ...
	I1212 19:29:07.976288  140007 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22112-135957/.minikube/cache/iso/amd64/minikube-v1.37.0-1765505725-22112-amd64.iso
	I1212 19:29:19.948223  140007 out.go:99] Starting "download-only-442866" primary control-plane node in "download-only-442866" cluster
	I1212 19:29:19.948268  140007 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 19:29:20.064805  140007 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1212 19:29:20.064847  140007 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:20.065609  140007 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 19:29:20.067217  140007 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1212 19:29:20.067233  140007 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1212 19:29:20.181179  140007 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1212 19:29:20.181348  140007 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-442866 host does not exist
	  To start a cluster, run: "minikube start -p download-only-442866"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
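
Note: the download entry in the log above fetches the preload tarball with an inline md5 parameter (?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b). The sketch below is a minimal Go illustration of that verification step, not minikube's actual implementation; the tarball path and digest are copied from the log and the file must already exist locally.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 recomputes the md5 digest of a file and compares it to the expected hex string.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Path and digest taken from the download entry in the log above.
	tarball := "/home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if err := verifyMD5(tarball, "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload tarball checksum OK")
}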

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-442866
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (11.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-677129 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-677129 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.516105293s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (11.52s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 19:29:45.558167  139995 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 19:29:45.558210  139995 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-677129
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-677129: exit status 85 (73.752857ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-442866 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-442866 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-442866                                                                                                                                                 │ download-only-442866 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-677129 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-677129 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:34.096720  140266 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:29:34.096983  140266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:34.096994  140266 out.go:374] Setting ErrFile to fd 2...
	I1212 19:29:34.096999  140266 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:34.097261  140266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:29:34.097820  140266 out.go:368] Setting JSON to true
	I1212 19:29:34.098786  140266 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4314,"bootTime":1765563460,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:29:34.098857  140266 start.go:143] virtualization: kvm guest
	I1212 19:29:34.100723  140266 out.go:99] [download-only-677129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:29:34.100919  140266 notify.go:221] Checking for updates...
	I1212 19:29:34.102048  140266 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:34.103403  140266 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:29:34.104892  140266 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:29:34.106102  140266 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:29:34.107352  140266 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:29:34.109760  140266 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:34.110101  140266 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:34.140761  140266 out.go:99] Using the kvm2 driver based on user configuration
	I1212 19:29:34.140800  140266 start.go:309] selected driver: kvm2
	I1212 19:29:34.140809  140266 start.go:927] validating driver "kvm2" against <nil>
	I1212 19:29:34.141178  140266 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:34.141648  140266 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1212 19:29:34.141796  140266 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:34.141821  140266 cni.go:84] Creating CNI manager for ""
	I1212 19:29:34.141867  140266 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:29:34.141878  140266 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:34.141916  140266 start.go:353] cluster config:
	{Name:download-only-677129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-677129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:34.142011  140266 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:29:34.143245  140266 out.go:99] Starting "download-only-677129" primary control-plane node in "download-only-677129" cluster
	I1212 19:29:34.143268  140266 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:29:34.246898  140266 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1212 19:29:34.246950  140266 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:34.247835  140266 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 19:29:34.249389  140266 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1212 19:29:34.249409  140266 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1212 19:29:34.361060  140266 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1212 19:29:34.361106  140266 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-677129 host does not exist
	  To start a cluster, run: "minikube start -p download-only-677129"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-677129
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (12.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-167722 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-167722 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.532139796s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (12.53s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 19:29:58.467822  139995 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1212 19:29:58.467879  139995 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-167722
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-167722: exit status 85 (71.548597ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-442866 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-442866 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-442866                                                                                                                                                        │ download-only-442866 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-677129 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-677129 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ delete  │ -p download-only-677129                                                                                                                                                        │ download-only-677129 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │ 12 Dec 25 19:29 UTC │
	│ start   │ -o=json --download-only -p download-only-167722 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-167722 │ jenkins │ v1.37.0 │ 12 Dec 25 19:29 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 19:29:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 19:29:45.987372  140479 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:29:45.987625  140479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:45.987634  140479 out.go:374] Setting ErrFile to fd 2...
	I1212 19:29:45.987638  140479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:29:45.987824  140479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:29:45.988288  140479 out.go:368] Setting JSON to true
	I1212 19:29:45.989105  140479 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4326,"bootTime":1765563460,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:29:45.989169  140479 start.go:143] virtualization: kvm guest
	I1212 19:29:45.990919  140479 out.go:99] [download-only-167722] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:29:45.991085  140479 notify.go:221] Checking for updates...
	I1212 19:29:45.992791  140479 out.go:171] MINIKUBE_LOCATION=22112
	I1212 19:29:45.993901  140479 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:29:45.994939  140479 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:29:45.995983  140479 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:29:45.999277  140479 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1212 19:29:46.001365  140479 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 19:29:46.001579  140479 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:29:46.032651  140479 out.go:99] Using the kvm2 driver based on user configuration
	I1212 19:29:46.032685  140479 start.go:309] selected driver: kvm2
	I1212 19:29:46.032704  140479 start.go:927] validating driver "kvm2" against <nil>
	I1212 19:29:46.033013  140479 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 19:29:46.033535  140479 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1212 19:29:46.033678  140479 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 19:29:46.033699  140479 cni.go:84] Creating CNI manager for ""
	I1212 19:29:46.033751  140479 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 19:29:46.033760  140479 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 19:29:46.033804  140479 start.go:353] cluster config:
	{Name:download-only-167722 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-167722 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:29:46.033892  140479 iso.go:125] acquiring lock: {Name:mka604e7c5a779b48764eb6b2b4a8a1c6683346a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 19:29:46.035128  140479 out.go:99] Starting "download-only-167722" primary control-plane node in "download-only-167722" cluster
	I1212 19:29:46.035153  140479 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 19:29:46.562496  140479 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1212 19:29:46.562536  140479 cache.go:65] Caching tarball of preloaded images
	I1212 19:29:46.563388  140479 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 19:29:46.564975  140479 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1212 19:29:46.564997  140479 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1212 19:29:46.677401  140479 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1212 19:29:46.677451  140479 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22112-135957/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-167722 host does not exist
	  To start a cluster, run: "minikube start -p download-only-167722"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-167722
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1212 19:29:59.275317  139995 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-604879 --alsologtostderr --binary-mirror http://127.0.0.1:35119 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-604879" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-604879
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
x
+
TestOffline (119.44s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-066744 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-066744 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m58.379877618s)
helpers_test.go:176: Cleaning up "offline-crio-066744" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-066744
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-066744: (1.057801572s)
--- PASS: TestOffline (119.44s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-347541
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-347541: exit status 85 (70.809166ms)

                                                
                                                
-- stdout --
	* Profile "addons-347541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-347541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-347541
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-347541: exit status 85 (70.940513ms)

                                                
                                                
-- stdout --
	* Profile "addons-347541" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-347541"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (132.11s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-347541 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-347541 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.11317782s)
--- PASS: TestAddons/Setup (132.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-347541 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-347541 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (12.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-347541 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-347541 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [86482a73-fed6-4ee2-93dd-8079de7542f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [86482a73-fed6-4ee2-93dd-8079de7542f0] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004301773s
addons_test.go:696: (dbg) Run:  kubectl --context addons-347541 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-347541 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-347541 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 6.726788ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-5td7r" [201134be-c27b-4ed0-83ec-71d107dac0c0] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.065220967s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-gxsjd" [0943e635-926e-40e1-9444-adcc285ac289] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.009069959s
addons_test.go:394: (dbg) Run:  kubectl --context addons-347541 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-347541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-347541 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.231874881s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.10s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.162516ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-347541
addons_test.go:334: (dbg) Run:  kubectl --context addons-347541 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-9ff97" [84c2a1e4-d4fc-4ee8-b67c-d959db1c1dfc] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00726575s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable inspektor-gadget --alsologtostderr -v=1: (5.902663357s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 8.664627ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-tmr5k" [5dd23de9-3bea-45d2-b80b-4b966bf80193] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.063721968s
addons_test.go:465: (dbg) Run:  kubectl --context addons-347541 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable metrics-server --alsologtostderr -v=1: (1.464828569s)
--- PASS: TestAddons/parallel/MetricsServer (6.62s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1212 19:32:52.113403  139995 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 19:32:52.117556  139995 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 19:32:52.117583  139995 kapi.go:107] duration metric: took 4.196028ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.208111ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-347541 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-347541 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [893d927a-949c-4469-8ece-0101dbc7644b] Pending
helpers_test.go:353: "task-pv-pod" [893d927a-949c-4469-8ece-0101dbc7644b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [893d927a-949c-4469-8ece-0101dbc7644b] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004532997s
addons_test.go:574: (dbg) Run:  kubectl --context addons-347541 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-347541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-347541 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-347541 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-347541 delete pod task-pv-pod: (1.065268141s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-347541 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-347541 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-347541 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [6592c32c-526a-4f53-bbde-212f9502526f] Pending
helpers_test.go:353: "task-pv-pod-restore" [6592c32c-526a-4f53-bbde-212f9502526f] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.004100241s
addons_test.go:616: (dbg) Run:  kubectl --context addons-347541 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-347541 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-347541 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.790194981s)
--- PASS: TestAddons/parallel/CSI (46.71s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-347541 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-h9pxs" [17fa9ed9-d1c7-4882-8f1c-c71c8ae0a883] Pending
helpers_test.go:353: "headlamp-dfcdc64b-h9pxs" [17fa9ed9-d1c7-4882-8f1c-c71c8ae0a883] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-h9pxs" [17fa9ed9-d1c7-4882-8f1c-c71c8ae0a883] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.005020478s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable headlamp --alsologtostderr -v=1: (6.155044279s)
--- PASS: TestAddons/parallel/Headlamp (21.04s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-82v4j" [49a5c3eb-443e-4286-9517-5c7e66f3faf6] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003724416s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (20.06s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-347541 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-347541 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/12/12 19:32:51 [DEBUG] GET http://192.168.39.202:5000
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [a305cc6e-d855-4cbd-ad03-f897840bd6f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [a305cc6e-d855-4cbd-ad03-f897840bd6f7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [a305cc6e-d855-4cbd-ad03-f897840bd6f7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004259871s
addons_test.go:969: (dbg) Run:  kubectl --context addons-347541 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 ssh "cat /opt/local-path-provisioner/pvc-c45c01d7-a7ea-4447-bcca-5299d5d7b030_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-347541 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-347541 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (20.06s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-s9zn5" [9049612d-22d5-42ee-a561-b6acda7ef4e9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006458954s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-g8n6k" [e1161cdb-9bd2-46d8-b6bb-6aa5a3afcf78] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.071222137s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-347541 addons disable yakd --alsologtostderr -v=1: (6.159612469s)
--- PASS: TestAddons/parallel/Yakd (11.23s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (89.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-347541
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-347541: (1m29.189359846s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-347541
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-347541
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-347541
--- PASS: TestAddons/StoppedEnableDisable (89.41s)

                                                
                                    
x
+
TestCertOptions (50.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-992051 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1212 20:32:22.585578  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-992051 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.349497151s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-992051 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-992051 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-992051 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-992051" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-992051
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-992051: (1.907457422s)
--- PASS: TestCertOptions (50.66s)

                                                
                                    
x
+
TestCertExpiration (352.4s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-391329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-391329 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (58.248115173s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-391329 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-391329 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m53.283691007s)
helpers_test.go:176: Cleaning up "cert-expiration-391329" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-391329
--- PASS: TestCertExpiration (352.40s)

                                                
                                    
x
+
TestForceSystemdFlag (60.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-547539 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1212 20:31:04.848360  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-547539 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (58.831839954s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-547539 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-547539" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-547539
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-547539: (1.191563007s)
--- PASS: TestForceSystemdFlag (60.23s)

                                                
                                    
x
+
TestForceSystemdEnv (55.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-370330 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-370330 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.13601037s)
helpers_test.go:176: Cleaning up "force-systemd-env-370330" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-370330
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-370330: (1.210432806s)
--- PASS: TestForceSystemdEnv (55.35s)

                                                
                                    
x
+
TestErrorSpam/setup (40.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-536462 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-536462 --driver=kvm2  --container-runtime=crio
E1212 19:37:13.408775  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.415230  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.426676  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.448251  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.489677  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.571278  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:13.732838  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:14.054643  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:14.696819  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:15.978461  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:18.541399  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:23.663455  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:37:33.905739  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-536462 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-536462 --driver=kvm2  --container-runtime=crio: (40.041653274s)
--- PASS: TestErrorSpam/setup (40.04s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.66s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 status
--- PASS: TestErrorSpam/status (0.66s)

                                                
                                    
x
+
TestErrorSpam/pause (1.45s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 pause
--- PASS: TestErrorSpam/pause (1.45s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
x
+
TestErrorSpam/stop (87.31s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop
E1212 19:37:54.387518  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:38:35.350735  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop: (1m23.749816728s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop: (1.705149982s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-536462 --log_dir /tmp/nospam-536462 stop: (1.849984208s)
--- PASS: TestErrorSpam/stop (87.31s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/test/nested/copy/139995/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (77.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1212 19:39:57.275243  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-202590 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m17.859520081s)
--- PASS: TestFunctional/serial/StartWithProxy (77.86s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (52.31s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1212 19:40:33.323084  139995 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-202590 --alsologtostderr -v=8: (52.30614877s)
functional_test.go:678: soft start took 52.306849744s for "functional-202590" cluster.
I1212 19:41:25.629586  139995 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (52.31s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-202590 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:3.1: (1.099132639s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:3.3: (1.106742038s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 cache add registry.k8s.io/pause:latest: (1.047862606s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-202590 /tmp/TestFunctionalserialCacheCmdcacheadd_local862425297/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache add minikube-local-cache-test:functional-202590
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 cache add minikube-local-cache-test:functional-202590: (1.900174964s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache delete minikube-local-cache-test:functional-202590
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-202590
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (177.951256ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 kubectl -- --context functional-202590 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-202590 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.94s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 19:42:13.408673  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-202590 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.940328887s)
functional_test.go:776: restart took 41.940455162s for "functional-202590" cluster.
I1212 19:42:15.372140  139995 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (41.94s)
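
The --extra-config value uses the component.key=value form, and the restart above re-applies it to the existing cluster. A sketch of the same flag plus a spot check that the API server picked it up (the pgrep/grep check is an assumption about how the flag shows on the kube-apiserver command line, not part of the test):
    out/minikube-linux-amd64 start -p functional-202590 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    out/minikube-linux-amd64 -p functional-202590 ssh "sudo pgrep -af kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'"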

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-202590 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
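
The per-component phase/status lines above come from the control-plane pod list in kube-system; the same view can be pulled directly with a jsonpath query (a sketch, not the query the test itself runs):
    kubectl --context functional-202590 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'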

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 logs: (1.23416837s)
--- PASS: TestFunctional/serial/LogsCmd (1.23s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 logs --file /tmp/TestFunctionalserialLogsFileCmd3578772419/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 logs --file /tmp/TestFunctionalserialLogsFileCmd3578772419/001/logs.txt: (1.237761211s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-202590 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-202590
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-202590: exit status 115 (302.273616ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.110:30559 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-202590 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)
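
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service object exists but has no running pod behind it. One way to confirm that before running minikube service (a sketch; the endpoints check is an addition, not part of the test):
    kubectl --context functional-202590 apply -f testdata/invalidsvc.yaml
    kubectl --context functional-202590 get endpoints invalid-svc    # no addresses -> service unreachable
    kubectl --context functional-202590 delete -f testdata/invalidsvc.yaml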

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 config get cpus: exit status 14 (78.717387ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 config get cpus: exit status 14 (66.869991ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
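
Note that config get on an unset key fails (exit status 14 in this run, "specified key could not be found in config") rather than printing an empty value, which is why the non-zero exits above count as passes. The round-trip, as a sketch:
    out/minikube-linux-amd64 -p functional-202590 config set cpus 2
    out/minikube-linux-amd64 -p functional-202590 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-202590 config unset cpus
    out/minikube-linux-amd64 -p functional-202590 config get cpus      # exit status 14: key not set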

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (26.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-202590 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-202590 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 146171: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.12s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-202590 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (123.398527ms)

                                                
                                                
-- stdout --
	* [functional-202590] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:42:23.846536  146028 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:42:23.846793  146028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:42:23.846807  146028 out.go:374] Setting ErrFile to fd 2...
	I1212 19:42:23.846814  146028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:42:23.847048  146028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:42:23.847470  146028 out.go:368] Setting JSON to false
	I1212 19:42:23.848349  146028 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5084,"bootTime":1765563460,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:42:23.848406  146028 start.go:143] virtualization: kvm guest
	I1212 19:42:23.850197  146028 out.go:179] * [functional-202590] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:42:23.851383  146028 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:42:23.851371  146028 notify.go:221] Checking for updates...
	I1212 19:42:23.852488  146028 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:42:23.853631  146028 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:42:23.854734  146028 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:42:23.855798  146028 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:42:23.856969  146028 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:42:23.858426  146028 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:42:23.858947  146028 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:42:23.894073  146028 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 19:42:23.895036  146028 start.go:309] selected driver: kvm2
	I1212 19:42:23.895051  146028 start.go:927] validating driver "kvm2" against &{Name:functional-202590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-202590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:42:23.895187  146028 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:42:23.897252  146028 out.go:203] 
	W1212 19:42:23.898311  146028 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:42:23.899268  146028 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
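
--dry-run runs the validation path without starting or mutating the cluster; the 250MB request fails against the 1800MB usable minimum reported above, hence exit status 23, while the second invocation without --memory validates cleanly. The pair, as a sketch:
    out/minikube-linux-amd64 start -p functional-202590 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-202590 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # passes validation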

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-202590 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-202590 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.861665ms)

                                                
                                                
-- stdout --
	* [functional-202590] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:42:23.724336  146012 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:42:23.724423  146012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:42:23.724427  146012 out.go:374] Setting ErrFile to fd 2...
	I1212 19:42:23.724431  146012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:42:23.724729  146012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:42:23.725151  146012 out.go:368] Setting JSON to false
	I1212 19:42:23.725970  146012 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5084,"bootTime":1765563460,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:42:23.726026  146012 start.go:143] virtualization: kvm guest
	I1212 19:42:23.727971  146012 out.go:179] * [functional-202590] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 19:42:23.729269  146012 notify.go:221] Checking for updates...
	I1212 19:42:23.729273  146012 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:42:23.730621  146012 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:42:23.731744  146012 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:42:23.732931  146012 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:42:23.733994  146012 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:42:23.734997  146012 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:42:23.736398  146012 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:42:23.736866  146012 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:42:23.768177  146012 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 19:42:23.769232  146012 start.go:309] selected driver: kvm2
	I1212 19:42:23.769251  146012 start.go:927] validating driver "kvm2" against &{Name:functional-202590 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-202590 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:42:23.769392  146012 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:42:23.771346  146012 out.go:203] 
	W1212 19:42:23.772424  146012 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:42:23.773584  146012 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.73s)
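
The -f flag takes a Go template over the status fields ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}); the label text around each field, including the "kublet" spelling above, is literal text in the test's own template string. The three forms exercised, as a sketch:
    out/minikube-linux-amd64 -p functional-202590 status
    out/minikube-linux-amd64 -p functional-202590 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-202590 status -o json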

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (29.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-202590 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-202590 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-pzgr6" [2c03e32a-cecf-4ff2-9938-986229d4c576] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
2025/12/12 19:42:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "hello-node-connect-7d85dfc575-pzgr6" [2c03e32a-cecf-4ff2-9938-986229d4c576] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 29.022359956s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.110:31128
functional_test.go:1680: http://192.168.39.110:31128: success! body:
Request served by hello-node-connect-7d85dfc575-pzgr6

HTTP/1.1 GET /

Host: 192.168.39.110:31128
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (29.51s)
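
The flow above is the standard NodePort round-trip: create a deployment, expose it, resolve the node URL with minikube service --url, then hit the endpoint. A sketch of the same steps, with plain curl standing in for the test's HTTP client (curl is the only addition):
    kubectl --context functional-202590 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-202590 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-202590 service hello-node-connect --url)
    curl -s "$URL"    # echo-server reflects the request back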

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (56.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0872a48a-77da-45c7-af24-f8024a75f81a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004342969s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-202590 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-202590 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-202590 get pvc myclaim -o=json
I1212 19:42:31.509939  139995 retry.go:31] will retry after 1.644519279s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ecf948c7-97e9-481e-9460-746259c1b939 ResourceVersion:810 Generation:0 CreationTimestamp:2025-12-12 19:42:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-ecf948c7-97e9-481e-9460-746259c1b939 StorageClassName:0xc001c482b0 VolumeMode:0xc001c482c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-202590 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-202590 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [b675be73-6682-42db-8f73-513993bf93cd] Pending
helpers_test.go:353: "sp-pod" [b675be73-6682-42db-8f73-513993bf93cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [b675be73-6682-42db-8f73-513993bf93cd] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 40.004603297s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-202590 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-202590 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-202590 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:43:14.366680  139995 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [8eac7ef9-fccf-4a00-815d-10a1830febf1] Pending
helpers_test.go:353: "sp-pod" [8eac7ef9-fccf-4a00-815d-10a1830febf1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00523008s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-202590 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.24s)
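
The claim is bound by the minikube hostpath provisioner (k8s.io/minikube-hostpath in the annotations above); the "Pending, want Bound" retry is just the provisioner catching up. A sketch of the apply-and-check portion (the jsonpath check is an alternative to the test's polling, not what it runs):
    kubectl --context functional-202590 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-202590 get pvc myclaim -o jsonpath='{.status.phase}'    # Pending until bound, then Bound
    kubectl --context functional-202590 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-202590 exec sp-pod -- touch /tmp/mount/foo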

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh -n functional-202590 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cp functional-202590:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd424237512/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh -n functional-202590 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh -n functional-202590 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (40.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-202590 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-6frp8" [3b789efc-b311-4932-ba98-e3a6062550d8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-6frp8" [3b789efc-b311-4932-ba98-e3a6062550d8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 36.007157724s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;": exit status 1 (202.666317ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:43:13.922272  139995 retry.go:31] will retry after 1.334276777s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;": exit status 1 (143.211546ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:43:15.400483  139995 retry.go:31] will retry after 2.038719023s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.08s)
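
The two failed exec attempts are the usual MySQL warm-up window: the pod is Running before mysqld has finished initializing, so transient access-denied and socket errors are expected and retried until the query goes through. The check itself, as a sketch:
    kubectl --context functional-202590 exec mysql-6bcdcbc558-6frp8 -- mysql -ppassword -e "show databases;"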

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/139995/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /etc/test/nested/copy/139995/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/139995.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /etc/ssl/certs/139995.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/139995.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /usr/share/ca-certificates/139995.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1399952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /etc/ssl/certs/1399952.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1399952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /usr/share/ca-certificates/1399952.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)
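
The *.pem files are the host-provided certificates synced into the VM, and the *.0 names (51391683.0, 3ec20f2e.0) appear to be the OpenSSL subject-hash style aliases for the same certificates. A sketch of checking one hash from inside the guest (assumes openssl is available in the guest image):
    out/minikube-linux-amd64 -p functional-202590 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/139995.pem"    # prints the subject hash the .0 alias is named after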

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-202590 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active docker": exit status 1 (202.81699ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active containerd": exit status 1 (182.131842ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
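
The non-zero exits here are the desired result: systemctl is-active prints "inactive" and exits 3 when a unit is not running, which is exactly what the test expects for docker and containerd on a crio cluster. The checks, as run:
    out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active docker"        # inactive, exit 3
    out/minikube-linux-amd64 -p functional-202590 ssh "sudo systemctl is-active containerd"    # inactive, exit 3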

                                                
                                    
x
+
TestFunctional/parallel/License (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-202590 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-202590 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-rfgs8" [a1ca5560-32ef-404d-a710-453c01dd9a49] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-rfgs8" [a1ca5560-32ef-404d-a710-453c01dd9a49] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.005457155s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "387.015211ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.806519ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (10.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdany-port1132985573/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765568542839769250" to /tmp/TestFunctionalparallelMountCmdany-port1132985573/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765568542839769250" to /tmp/TestFunctionalparallelMountCmdany-port1132985573/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765568542839769250" to /tmp/TestFunctionalparallelMountCmdany-port1132985573/001/test-1765568542839769250
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.971943ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:42:23.086095  139995 retry.go:31] will retry after 697.667004ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 19:42 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 19:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 19:42 test-1765568542839769250
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh cat /mount-9p/test-1765568542839769250
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-202590 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [bc8fe315-3604-4920-a37e-b9a9209ae968] Pending
helpers_test.go:353: "busybox-mount" [bc8fe315-3604-4920-a37e-b9a9209ae968] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [bc8fe315-3604-4920-a37e-b9a9209ae968] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [bc8fe315-3604-4920-a37e-b9a9209ae968] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004783991s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-202590 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdany-port1132985573/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.38s)
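
minikube mount exports the host directory into the guest over 9p; the first findmnt attempt races the mount becoming ready, hence the single retry above. A sketch of the mount-and-verify portion (the host path is the test's temp dir and the trailing & backgrounds the mount process, which the test runs as a daemon):
    out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdany-port1132985573/001:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-202590 ssh -- ls -la /mount-9p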

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "272.870071ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.125037ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service list -o json
functional_test.go:1504: Took "441.660876ms" to run "out/minikube-linux-amd64 -p functional-202590 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.110:31041
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.110:31041
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdspecific-port3107351371/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
I1212 19:42:33.364917  139995 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.419783ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:42:33.402944  139995 retry.go:31] will retry after 526.385741ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdspecific-port3107351371/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "sudo umount -f /mount-9p": exit status 1 (157.144067ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-202590 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdspecific-port3107351371/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.38s)
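For reference, the specific-port flow exercised above reduces to the following host-side sketch (profile name, temp directory and port are the ones from this run; backgrounding with & stands in for the test harness's daemon handling):

# start a 9p mount on a fixed port, left running in the background
out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdspecific-port3107351371/001:/mount-9p --alsologtostderr -v=1 --port 46464 &
# verify the mount from inside the guest, inspect it, then force-unmount
out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-202590 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-202590 ssh "sudo umount -f /mount-9p"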

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-202590 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-202590
localhost/kicbase/echo-server:functional-202590
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-202590 image ls --format short --alsologtostderr:
I1212 19:42:50.883167  146988 out.go:360] Setting OutFile to fd 1 ...
I1212 19:42:50.883452  146988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:50.883463  146988 out.go:374] Setting ErrFile to fd 2...
I1212 19:42:50.883467  146988 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:50.883725  146988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:42:50.884327  146988 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:50.884434  146988 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:50.886570  146988 ssh_runner.go:195] Run: systemctl --version
I1212 19:42:50.888592  146988 main.go:143] libmachine: domain functional-202590 has defined MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:50.888935  146988 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1d:6b:98", ip: ""} in network mk-functional-202590: {Iface:virbr1 ExpiryTime:2025-12-12 20:39:29 +0000 UTC Type:0 Mac:52:54:00:1d:6b:98 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-202590 Clientid:01:52:54:00:1d:6b:98}
I1212 19:42:50.888969  146988 main.go:143] libmachine: domain functional-202590 has defined IP address 192.168.39.110 and MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:50.889127  146988 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-202590/id_rsa Username:docker}
I1212 19:42:50.971703  146988 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-202590 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-202590  │ 9056ab77afb8e │ 4.95MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-202590  │ 889cd0ae119bc │ 1.47MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ localhost/minikube-local-cache-test     │ functional-202590  │ 0b91465fa5f77 │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-202590 image ls --format table --alsologtostderr:
I1212 19:43:06.392097  147118 out.go:360] Setting OutFile to fd 1 ...
I1212 19:43:06.392413  147118 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:06.392423  147118 out.go:374] Setting ErrFile to fd 2...
I1212 19:43:06.392429  147118 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:06.392642  147118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:43:06.393211  147118 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:43:06.393331  147118 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:43:06.395563  147118 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:06.397987  147118 main.go:143] libmachine: domain functional-202590 has defined MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:43:06.398407  147118 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1d:6b:98", ip: ""} in network mk-functional-202590: {Iface:virbr1 ExpiryTime:2025-12-12 20:39:29 +0000 UTC Type:0 Mac:52:54:00:1d:6b:98 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-202590 Clientid:01:52:54:00:1d:6b:98}
I1212 19:43:06.398441  147118 main.go:143] libmachine: domain functional-202590 has defined IP address 192.168.39.110 and MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:43:06.398579  147118 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-202590/id_rsa Username:docker}
I1212 19:43:06.491145  147118 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-202590 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9a
a0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"889cd0ae119bc957b9379804d2c5c560416218fd9d7fb1566c89833e2133d07e","repoDigests":["localhost/my-image@sha256:564456fc15ad0f4c0146bd003e91108cdd4d85fa21e93bc45092b5836066e5ac"],"repoTags":["localhost/my-image:functional-202590"],"size":"1468599"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"
],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io
/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529
bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{
"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0b91465fa5f77242e2ec10e45254ab894dc7327c844d62ecc20d4520a2b24334","repoDigests":["localhost/minikube-local-cache-test@sha256:4a3f9d2d99177eabc5081c3acdd92a77033341058aa2ad1feb745310410b7110"],"repoTags":["localhost/minikube-local-cache-test:functional-202590"],"size":"3330"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"0184c1613d9293
1126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functio
nal-202590"],"size":"4945146"},{"id":"70df4575e04854dd5853a6c0470908f966cf39a5822cf059c340a6dc3a85a702","repoDigests":["docker.io/library/9163bd8ac093dd7a132dde396e94f2de6d866699730f9d36206c03d11443f2aa-tmp@sha256:dd1fdf1186cc83873e5b28263b34ced092d9852375d6c212953b932f0d9729bc"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-202590 image ls --format json --alsologtostderr:
I1212 19:43:06.176774  147107 out.go:360] Setting OutFile to fd 1 ...
I1212 19:43:06.177019  147107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:06.177027  147107 out.go:374] Setting ErrFile to fd 2...
I1212 19:43:06.177036  147107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:43:06.177298  147107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:43:06.177935  147107 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:43:06.178034  147107 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:43:06.180099  147107 ssh_runner.go:195] Run: systemctl --version
I1212 19:43:06.182414  147107 main.go:143] libmachine: domain functional-202590 has defined MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:43:06.182890  147107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1d:6b:98", ip: ""} in network mk-functional-202590: {Iface:virbr1 ExpiryTime:2025-12-12 20:39:29 +0000 UTC Type:0 Mac:52:54:00:1d:6b:98 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-202590 Clientid:01:52:54:00:1d:6b:98}
I1212 19:43:06.182927  147107 main.go:143] libmachine: domain functional-202590 has defined IP address 192.168.39.110 and MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:43:06.183105  147107 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-202590/id_rsa Username:docker}
I1212 19:43:06.277277  147107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
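The JSON stdout above is one flat array of image records, each with "id", "repoDigests", "repoTags" and "size" fields. A small host-side sketch for pulling out just the tagged names, assuming jq is installed on the host (untagged entries carry an empty repoTags array and are skipped):

out/minikube-linux-amd64 -p functional-202590 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]' \
  | sort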

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-202590 image ls --format yaml --alsologtostderr:
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 0b91465fa5f77242e2ec10e45254ab894dc7327c844d62ecc20d4520a2b24334
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a3f9d2d99177eabc5081c3acdd92a77033341058aa2ad1feb745310410b7110
repoTags:
- localhost/minikube-local-cache-test:functional-202590
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-202590
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-202590 image ls --format yaml --alsologtostderr:
I1212 19:42:51.070533  146999 out.go:360] Setting OutFile to fd 1 ...
I1212 19:42:51.070834  146999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:51.070845  146999 out.go:374] Setting ErrFile to fd 2...
I1212 19:42:51.070849  146999 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:51.071157  146999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:42:51.071848  146999 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:51.071968  146999 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:51.074031  146999 ssh_runner.go:195] Run: systemctl --version
I1212 19:42:51.076079  146999 main.go:143] libmachine: domain functional-202590 has defined MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:51.076466  146999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1d:6b:98", ip: ""} in network mk-functional-202590: {Iface:virbr1 ExpiryTime:2025-12-12 20:39:29 +0000 UTC Type:0 Mac:52:54:00:1d:6b:98 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-202590 Clientid:01:52:54:00:1d:6b:98}
I1212 19:42:51.076501  146999 main.go:143] libmachine: domain functional-202590 has defined IP address 192.168.39.110 and MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:51.076652  146999 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-202590/id_rsa Username:docker}
I1212 19:42:51.161809  146999 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (14.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh pgrep buildkitd: exit status 1 (156.113274ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image build -t localhost/my-image:functional-202590 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 image build -t localhost/my-image:functional-202590 testdata/build --alsologtostderr: (14.418695461s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-202590 image build -t localhost/my-image:functional-202590 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 70df4575e04
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-202590
--> 889cd0ae119
Successfully tagged localhost/my-image:functional-202590
889cd0ae119bc957b9379804d2c5c560416218fd9d7fb1566c89833e2133d07e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-202590 image build -t localhost/my-image:functional-202590 testdata/build --alsologtostderr:
I1212 19:42:51.417486  147037 out.go:360] Setting OutFile to fd 1 ...
I1212 19:42:51.417760  147037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:51.417769  147037 out.go:374] Setting ErrFile to fd 2...
I1212 19:42:51.417774  147037 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:42:51.417946  147037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:42:51.418475  147037 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:51.419208  147037 config.go:182] Loaded profile config "functional-202590": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 19:42:51.421465  147037 ssh_runner.go:195] Run: systemctl --version
I1212 19:42:51.423845  147037 main.go:143] libmachine: domain functional-202590 has defined MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:51.424314  147037 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1d:6b:98", ip: ""} in network mk-functional-202590: {Iface:virbr1 ExpiryTime:2025-12-12 20:39:29 +0000 UTC Type:0 Mac:52:54:00:1d:6b:98 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:functional-202590 Clientid:01:52:54:00:1d:6b:98}
I1212 19:42:51.424344  147037 main.go:143] libmachine: domain functional-202590 has defined IP address 192.168.39.110 and MAC address 52:54:00:1d:6b:98 in network mk-functional-202590
I1212 19:42:51.424520  147037 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-202590/id_rsa Username:docker}
I1212 19:42:51.508041  147037 build_images.go:162] Building image from path: /tmp/build.2342775340.tar
I1212 19:42:51.508173  147037 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:42:51.520951  147037 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2342775340.tar
I1212 19:42:51.526076  147037 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2342775340.tar: stat -c "%s %y" /var/lib/minikube/build/build.2342775340.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2342775340.tar': No such file or directory
I1212 19:42:51.526123  147037 ssh_runner.go:362] scp /tmp/build.2342775340.tar --> /var/lib/minikube/build/build.2342775340.tar (3072 bytes)
I1212 19:42:51.555660  147037 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2342775340
I1212 19:42:51.567201  147037 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2342775340 -xf /var/lib/minikube/build/build.2342775340.tar
I1212 19:42:51.578285  147037 crio.go:315] Building image: /var/lib/minikube/build/build.2342775340
I1212 19:42:51.578352  147037 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-202590 /var/lib/minikube/build/build.2342775340 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 19:43:05.691830  147037 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-202590 /var/lib/minikube/build/build.2342775340 --cgroup-manager=cgroupfs: (14.113450314s)
I1212 19:43:05.691926  147037 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2342775340
I1212 19:43:05.723474  147037 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2342775340.tar
I1212 19:43:05.751420  147037 build_images.go:218] Built localhost/my-image:functional-202590 from /tmp/build.2342775340.tar
I1212 19:43:05.751480  147037 build_images.go:134] succeeded building to: functional-202590
I1212 19:43:05.751486  147037 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (14.91s)
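Judging from the STEP lines in the build output, the testdata/build context is a three-instruction image (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal way to reproduce the build and confirm the result from the host, with the grep filter added only for readability:

out/minikube-linux-amd64 -p functional-202590 image build -t localhost/my-image:functional-202590 testdata/build --alsologtostderr
out/minikube-linux-amd64 -p functional-202590 image ls | grep my-image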

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.946466769s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-202590
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T" /mount1: exit status 1 (177.722606ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:42:34.787683  139995 retry.go:31] will retry after 377.819813ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-202590 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-202590 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817850003/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image load --daemon kicbase/echo-server:functional-202590 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 image load --daemon kicbase/echo-server:functional-202590 --alsologtostderr: (1.34447023s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image load --daemon kicbase/echo-server:functional-202590 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-202590 image load --daemon kicbase/echo-server:functional-202590 --alsologtostderr: (2.608929577s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.85s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-202590
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image load --daemon kicbase/echo-server:functional-202590 --alsologtostderr
E1212 19:42:41.117538  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image save kicbase/echo-server:functional-202590 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image rm kicbase/echo-server:functional-202590 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.20s)
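Taken together, the ImageSaveToFile, ImageRemove and ImageLoadFromFile runs above form a save-and-restore round trip; the tarball path is the one used by this job, but any writable host path behaves the same:

out/minikube-linux-amd64 -p functional-202590 image save kicbase/echo-server:functional-202590 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-202590 image rm kicbase/echo-server:functional-202590 --alsologtostderr
out/minikube-linux-amd64 -p functional-202590 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-202590 image ls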

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-202590
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-202590 image save --daemon kicbase/echo-server:functional-202590 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-202590
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-202590
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-202590
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-202590
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-135957/.minikube/files/etc/test/nested/copy/139995/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (77.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-066499 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m17.805368665s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (77.81s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (36.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 19:44:40.434237  139995 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-066499 --alsologtostderr -v=8: (36.035505669s)
functional_test.go:678: soft start took 36.035919228s for "functional-066499" cluster.
I1212 19:45:16.470132  139995 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (36.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-066499 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:3.1: (1.133256414s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:3.3: (1.064059091s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 cache add registry.k8s.io/pause:latest: (1.056725328s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3238337821/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache add minikube-local-cache-test:functional-066499
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 cache add minikube-local-cache-test:functional-066499: (1.916584395s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache delete minikube-local-cache-test:functional-066499
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (172.988427ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.48s)
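The cache_reload sequence above is the round trip worth remembering: delete the image inside the node, confirm it is gone, then restore it from the host-side cache. A sketch with the same commands:

    # remove the image from the node's runtime
    out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # crictl inspecti exits non-zero while the image is absent
    out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image missing"
    # push cached images back into the node and re-check
    out/minikube-linux-amd64 -p functional-066499 cache reload
    out/minikube-linux-amd64 -p functional-066499 ssh sudo crictl inspecti registry.k8s.io/pause:latest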

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 kubectl -- --context functional-066499 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-066499 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (33.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-066499 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.659691469s)
functional_test.go:776: restart took 33.659826091s for "functional-066499" cluster.
I1212 19:45:57.901528  139995 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (33.66s)
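The ExtraConfig restart shows how component flags are injected at start time; a sketch of the same invocation (any other component.key=value pair follows this pattern):

    # restart the existing profile with an extra apiserver admission plugin, waiting for all components
    out/minikube-linux-amd64 start -p functional-066499 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all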

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-066499 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 logs: (1.214676889s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs225833212/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs225833212/001/logs.txt: (1.243748221s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-066499 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-066499
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-066499: exit status 115 (230.672476ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.247:31452 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-066499 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.22s)
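The InvalidService case exercises the SVC_UNREACHABLE path: a service with no running pod behind it makes `minikube service` exit with status 115 instead of printing a usable URL. A sketch, assuming a manifest equivalent to testdata/invalidsvc.yaml:

    kubectl --context functional-066499 apply -f testdata/invalidsvc.yaml
    # expected to fail with exit status 115: no pod backs the service
    out/minikube-linux-amd64 service invalid-svc -p functional-066499; echo "exit: $?"
    kubectl --context functional-066499 delete -f testdata/invalidsvc.yaml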

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 config get cpus: exit status 14 (72.050649ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 config get cpus: exit status 14 (61.25504ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
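The ConfigCmd run round-trips a single key; `config get` exits with status 14 when the key is absent, which is what the two non-zero exits above show. A sketch:

    out/minikube-linux-amd64 -p functional-066499 config unset cpus
    out/minikube-linux-amd64 -p functional-066499 config get cpus    # exit 14: key not found
    out/minikube-linux-amd64 -p functional-066499 config set cpus 2
    out/minikube-linux-amd64 -p functional-066499 config get cpus    # prints 2
    out/minikube-linux-amd64 -p functional-066499 config unset cpus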

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (16.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-066499 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-066499 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 149415: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (16.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-066499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (121.816813ms)

                                                
                                                
-- stdout --
	* [functional-066499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:46:16.009507  149259 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:46:16.009842  149259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:46:16.009853  149259 out.go:374] Setting ErrFile to fd 2...
	I1212 19:46:16.009860  149259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:46:16.010177  149259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:46:16.010783  149259 out.go:368] Setting JSON to false
	I1212 19:46:16.011945  149259 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5316,"bootTime":1765563460,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:46:16.012020  149259 start.go:143] virtualization: kvm guest
	I1212 19:46:16.014200  149259 out.go:179] * [functional-066499] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 19:46:16.015390  149259 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:46:16.015405  149259 notify.go:221] Checking for updates...
	I1212 19:46:16.017622  149259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:46:16.018761  149259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:46:16.019832  149259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:46:16.020797  149259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:46:16.021677  149259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:46:16.025876  149259 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 19:46:16.026579  149259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:46:16.059314  149259 out.go:179] * Using the kvm2 driver based on existing profile
	I1212 19:46:16.060366  149259 start.go:309] selected driver: kvm2
	I1212 19:46:16.060384  149259 start.go:927] validating driver "kvm2" against &{Name:functional-066499 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-066499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:46:16.060527  149259 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:46:16.062784  149259 out.go:203] 
	W1212 19:46:16.063852  149259 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 19:46:16.064872  149259 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.24s)
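The DryRun check validates flags against the existing profile without modifying the VM; an undersized --memory request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of the failing and passing invocations:

    # expected to fail: 250MB is below the usable minimum
    out/minikube-linux-amd64 start -p functional-066499 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # without the memory override the dry run succeeds
    out/minikube-linux-amd64 start -p functional-066499 --dry-run --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0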

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-066499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-066499 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (119.767029ms)

                                                
                                                
-- stdout --
	* [functional-066499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:46:16.248669  149290 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:46:16.249037  149290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:46:16.249055  149290 out.go:374] Setting ErrFile to fd 2...
	I1212 19:46:16.249063  149290 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:46:16.249563  149290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:46:16.250286  149290 out.go:368] Setting JSON to false
	I1212 19:46:16.251580  149290 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5316,"bootTime":1765563460,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 19:46:16.251663  149290 start.go:143] virtualization: kvm guest
	I1212 19:46:16.253544  149290 out.go:179] * [functional-066499] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1212 19:46:16.254815  149290 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 19:46:16.254830  149290 notify.go:221] Checking for updates...
	I1212 19:46:16.257287  149290 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 19:46:16.258376  149290 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 19:46:16.259490  149290 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 19:46:16.260709  149290 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 19:46:16.264341  149290 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 19:46:16.266265  149290 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 19:46:16.266957  149290 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 19:46:16.298087  149290 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1212 19:46:16.299449  149290 start.go:309] selected driver: kvm2
	I1212 19:46:16.299468  149290 start.go:927] validating driver "kvm2" against &{Name:functional-066499 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-066499 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.247 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 19:46:16.299612  149290 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 19:46:16.301525  149290 out.go:203] 
	W1212 19:46:16.302570  149290 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 19:46:16.303561  149290 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (21.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-066499 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-066499 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-5rzzz" [14aba759-e7f4-4883-aaa4-4562d8f53d14] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-5rzzz" [14aba759-e7f4-4883-aaa4-4562d8f53d14] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.199596306s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.247:31945
functional_test.go:1680: http://192.168.39.247:31945: success! body:
Request served by hello-node-connect-9f67c86d4-5rzzz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.247:31945
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (21.13s)
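The ServiceCmdConnect flow deploys an echo server, exposes it as a NodePort, and resolves the URL through minikube; a sketch where curl stands in for the harness's HTTP check:

    kubectl --context functional-066499 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-066499 expose deployment hello-node-connect --type=NodePort --port=8080
    # once the pod is Running, fetch the NodePort URL
    URL=$(out/minikube-linux-amd64 -p functional-066499 service hello-node-connect --url)
    curl -s "$URL"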

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (36.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a6afaeb8-042a-4b96-b24d-6ba4435c5c47] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007550122s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-066499 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-066499 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-066499 get pvc myclaim -o=json
I1212 19:46:13.312342  139995 retry.go:31] will retry after 1.020729792s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:52cb1c81-eb98-4a8b-a50a-0c6ebe6711f3 ResourceVersion:760 Generation:0 CreationTimestamp:2025-12-12 19:46:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0013cc740 VolumeMode:0xc0013cc750 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-066499 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-066499 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:46:14.563495  139995 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7558594b-b14c-47c9-8477-e3f4aef5450c] Pending
helpers_test.go:353: "sp-pod" [7558594b-b14c-47c9-8477-e3f4aef5450c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [7558594b-b14c-47c9-8477-e3f4aef5450c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.374655633s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-066499 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-066499 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-066499 delete -f testdata/storage-provisioner/pod.yaml: (1.001683856s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-066499 apply -f testdata/storage-provisioner/pod.yaml
I1212 19:46:38.307099  139995 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [d59e3666-8d29-4779-a5f4-ab2184ff8042] Pending
helpers_test.go:353: "sp-pod" [d59e3666-8d29-4779-a5f4-ab2184ff8042] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004849742s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-066499 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (36.47s)
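The PersistentVolumeClaim test is a persistence round trip: write a file onto the claim, delete the pod, recreate it, and confirm the file is still there. A sketch with the same manifests and paths:

    kubectl --context functional-066499 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-066499 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066499 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod and check that the file survived on the claim
    kubectl --context functional-066499 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066499 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-066499 exec sp-pod -- ls /tmp/mount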

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh -n functional-066499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cp functional-066499:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2771528057/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh -n functional-066499 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh -n functional-066499 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.21s)
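The cp subcommand copies files in both directions; a sketch mirroring the steps above (the /tmp destination on the host is arbitrary):

    # host -> node, then read it back over ssh
    out/minikube-linux-amd64 -p functional-066499 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-066499 ssh -n functional-066499 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p functional-066499 cp functional-066499:/home/docker/cp-test.txt /tmp/cp-test.txt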

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (33.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-066499 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-bs8mx" [cb34b7b7-4837-4fbe-b16b-0e58a8a4a5db] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-bs8mx" [cb34b7b7-4837-4fbe-b16b-0e58a8a4a5db] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 24.004060023s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;": exit status 1 (209.455952ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:46:30.467131  139995 retry.go:31] will retry after 571.71653ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;": exit status 1 (156.744568ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:46:31.196213  139995 retry.go:31] will retry after 2.171898045s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;": exit status 1 (286.053007ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:46:33.654734  139995 retry.go:31] will retry after 2.654174s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;": exit status 1 (792.266835ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1212 19:46:37.102048  139995 retry.go:31] will retry after 2.545461293s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (33.70s)
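The access-denied and socket errors above are mysqld still initializing; the test simply retries the same query until it succeeds. A sketch of that query (the pod name is specific to this run):

    kubectl --context functional-066499 replace --force -f testdata/mysql.yaml
    # repeat until mysqld inside the pod accepts the connection
    kubectl --context functional-066499 exec mysql-7d7b65bc95-bs8mx -- mysql -ppassword -e "show databases;"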

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/139995/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /etc/test/nested/copy/139995/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/139995.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /etc/ssl/certs/139995.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/139995.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /usr/share/ca-certificates/139995.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1399952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /etc/ssl/certs/1399952.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1399952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /usr/share/ca-certificates/1399952.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-066499 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active docker": exit status 1 (174.456107ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active containerd": exit status 1 (179.117023ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.35s)
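With crio as the active runtime, docker and containerd should be inactive; `systemctl is-active` reports status 3 for an inactive unit, which is why both ssh invocations above exit non-zero while still printing "inactive". A sketch:

    out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-066499 ssh "sudo systemctl is-active containerd"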

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-066499 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-066499 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-sclcq" [9987171f-b882-4b44-b763-509c147ded14] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-sclcq" [9987171f-b882-4b44-b763-509c147ded14] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006178854s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls --format short --alsologtostderr
2025/12/12 19:46:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066499 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-066499
localhost/kicbase/echo-server:functional-066499
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066499 image ls --format short --alsologtostderr:
I1212 19:46:40.684355  149681 out.go:360] Setting OutFile to fd 1 ...
I1212 19:46:40.684641  149681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:40.684651  149681 out.go:374] Setting ErrFile to fd 2...
I1212 19:46:40.684656  149681 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:40.684861  149681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:46:40.685425  149681 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:40.685520  149681 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:40.687928  149681 ssh_runner.go:195] Run: systemctl --version
I1212 19:46:40.690552  149681 main.go:143] libmachine: domain functional-066499 has defined MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:40.691000  149681 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:80:4f", ip: ""} in network mk-functional-066499: {Iface:virbr1 ExpiryTime:2025-12-12 20:43:37 +0000 UTC Type:0 Mac:52:54:00:c0:80:4f Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-066499 Clientid:01:52:54:00:c0:80:4f}
I1212 19:46:40.691028  149681 main.go:143] libmachine: domain functional-066499 has defined IP address 192.168.39.247 and MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:40.691206  149681 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-066499/id_rsa Username:docker}
I1212 19:46:40.783073  149681 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.33s)
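The stderr above shows how `image ls` gathers its data: it opens an SSH session to the guest and runs crictl there. A minimal sketch of performing the same check directly, mirroring the commands visible in this log:

out/minikube-linux-amd64 -p functional-066499 ssh "sudo crictl images"                  # human-readable list on the node
out/minikube-linux-amd64 -p functional-066499 ssh "sudo crictl images --output json"    # the raw JSON the command parses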

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066499 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-066499  │ 0b91465fa5f77 │ 3.33kB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-066499  │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066499 image ls --format table --alsologtostderr:
I1212 19:46:41.027994  149698 out.go:360] Setting OutFile to fd 1 ...
I1212 19:46:41.028149  149698 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.028162  149698 out.go:374] Setting ErrFile to fd 2...
I1212 19:46:41.028169  149698 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.028418  149698 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:46:41.029001  149698 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.029153  149698 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.031323  149698 ssh_runner.go:195] Run: systemctl --version
I1212 19:46:41.033613  149698 main.go:143] libmachine: domain functional-066499 has defined MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.034063  149698 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:80:4f", ip: ""} in network mk-functional-066499: {Iface:virbr1 ExpiryTime:2025-12-12 20:43:37 +0000 UTC Type:0 Mac:52:54:00:c0:80:4f Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-066499 Clientid:01:52:54:00:c0:80:4f}
I1212 19:46:41.034124  149698 main.go:143] libmachine: domain functional-066499 has defined IP address 192.168.39.247 and MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.034308  149698 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-066499/id_rsa Username:docker}
I1212 19:46:41.142237  149698 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066499 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-066499"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa3
8e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kin
dnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"siz
e":"803724943"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"0b91465fa5f77242e2ec10e45254ab894dc7327c844d62ecc20d4520a2b24334","repoDigests":["localhost/minikube-local-cache-test@sha256:4a3f9d2d99177eabc5081c3acdd92a77033341058aa2ad1feb745310410b7110"],"repoTags":["localhost/minikube-local-cache-test:functional-066499"],"size":"3330"},{"id":"aa5e3ebc0dfed056680518
6b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],
"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@
sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066499 image ls --format json --alsologtostderr:
I1212 19:46:41.018062  149692 out.go:360] Setting OutFile to fd 1 ...
I1212 19:46:41.018409  149692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.018421  149692 out.go:374] Setting ErrFile to fd 2...
I1212 19:46:41.018428  149692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.018762  149692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:46:41.019636  149692 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.019781  149692 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.022717  149692 ssh_runner.go:195] Run: systemctl --version
I1212 19:46:41.025561  149692 main.go:143] libmachine: domain functional-066499 has defined MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.026041  149692 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:80:4f", ip: ""} in network mk-functional-066499: {Iface:virbr1 ExpiryTime:2025-12-12 20:43:37 +0000 UTC Type:0 Mac:52:54:00:c0:80:4f Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-066499 Clientid:01:52:54:00:c0:80:4f}
I1212 19:46:41.026085  149692 main.go:143] libmachine: domain functional-066499 has defined IP address 192.168.39.247 and MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.026299  149692 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-066499/id_rsa Username:docker}
I1212 19:46:41.117513  149692 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
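Because --alsologtostderr keeps the debug log on stderr, the JSON document on stdout can be piped straight into a filter. A hedged example (jq is an assumption and is not used by the test; the repoTags field name is taken from the output above):

out/minikube-linux-amd64 -p functional-066499 image ls --format json --alsologtostderr | jq -r '.[].repoTags[]'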

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066499 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 0b91465fa5f77242e2ec10e45254ab894dc7327c844d62ecc20d4520a2b24334
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a3f9d2d99177eabc5081c3acdd92a77033341058aa2ad1feb745310410b7110
repoTags:
- localhost/minikube-local-cache-test:functional-066499
size: "3330"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-066499
size: "4943877"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066499 image ls --format yaml --alsologtostderr:
I1212 19:46:41.259438  149711 out.go:360] Setting OutFile to fd 1 ...
I1212 19:46:41.259717  149711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.259726  149711 out.go:374] Setting ErrFile to fd 2...
I1212 19:46:41.259731  149711 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.260682  149711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:46:41.261790  149711 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.261916  149711 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.263935  149711 ssh_runner.go:195] Run: systemctl --version
I1212 19:46:41.266574  149711 main.go:143] libmachine: domain functional-066499 has defined MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.266976  149711 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:80:4f", ip: ""} in network mk-functional-066499: {Iface:virbr1 ExpiryTime:2025-12-12 20:43:37 +0000 UTC Type:0 Mac:52:54:00:c0:80:4f Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-066499 Clientid:01:52:54:00:c0:80:4f}
I1212 19:46:41.267005  149711 main.go:143] libmachine: domain functional-066499 has defined IP address 192.168.39.247 and MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.267146  149711 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-066499/id_rsa Username:docker}
I1212 19:46:41.356361  149711 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh pgrep buildkitd: exit status 1 (174.848464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image build -t localhost/my-image:functional-066499 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 image build -t localhost/my-image:functional-066499 testdata/build --alsologtostderr: (3.728009191s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-066499 image build -t localhost/my-image:functional-066499 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 39a64ae82a9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-066499
--> f25efdcb7b7
Successfully tagged localhost/my-image:functional-066499
f25efdcb7b7fb0d3320e0d123b70e937c55971b2b3f463b18e699ae92cfb4ea3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-066499 image build -t localhost/my-image:functional-066499 testdata/build --alsologtostderr:
I1212 19:46:41.469672  149733 out.go:360] Setting OutFile to fd 1 ...
I1212 19:46:41.469961  149733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.469972  149733 out.go:374] Setting ErrFile to fd 2...
I1212 19:46:41.469976  149733 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 19:46:41.470245  149733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
I1212 19:46:41.470903  149733 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.471621  149733 config.go:182] Loaded profile config "functional-066499": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 19:46:41.473974  149733 ssh_runner.go:195] Run: systemctl --version
I1212 19:46:41.476215  149733 main.go:143] libmachine: domain functional-066499 has defined MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.476632  149733 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c0:80:4f", ip: ""} in network mk-functional-066499: {Iface:virbr1 ExpiryTime:2025-12-12 20:43:37 +0000 UTC Type:0 Mac:52:54:00:c0:80:4f Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:functional-066499 Clientid:01:52:54:00:c0:80:4f}
I1212 19:46:41.476663  149733 main.go:143] libmachine: domain functional-066499 has defined IP address 192.168.39.247 and MAC address 52:54:00:c0:80:4f in network mk-functional-066499
I1212 19:46:41.476825  149733 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/functional-066499/id_rsa Username:docker}
I1212 19:46:41.557354  149733 build_images.go:162] Building image from path: /tmp/build.564332689.tar
I1212 19:46:41.557445  149733 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 19:46:41.569481  149733 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.564332689.tar
I1212 19:46:41.574319  149733 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.564332689.tar: stat -c "%s %y" /var/lib/minikube/build/build.564332689.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.564332689.tar': No such file or directory
I1212 19:46:41.574355  149733 ssh_runner.go:362] scp /tmp/build.564332689.tar --> /var/lib/minikube/build/build.564332689.tar (3072 bytes)
I1212 19:46:41.615820  149733 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.564332689
I1212 19:46:41.628724  149733 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.564332689 -xf /var/lib/minikube/build/build.564332689.tar
I1212 19:46:41.640513  149733 crio.go:315] Building image: /var/lib/minikube/build/build.564332689
I1212 19:46:41.640610  149733 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-066499 /var/lib/minikube/build/build.564332689 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1212 19:46:45.103466  149733 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-066499 /var/lib/minikube/build/build.564332689 --cgroup-manager=cgroupfs: (3.462803758s)
I1212 19:46:45.103534  149733 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.564332689
I1212 19:46:45.116520  149733 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.564332689.tar
I1212 19:46:45.133131  149733 build_images.go:218] Built localhost/my-image:functional-066499 from /tmp/build.564332689.tar
I1212 19:46:45.133169  149733 build_images.go:134] succeeded building to: functional-066499
I1212 19:46:45.133174  149733 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.09s)
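From the STEP lines in the stdout, the build context under testdata/build amounts to a three-instruction container file plus a content.txt file. A hedged reconstruction for reproducing the build by hand (the Dockerfile name and the contents of content.txt are assumptions; only the instructions themselves appear in the log):

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "hello" > content.txt    # placeholder contents, not taken from the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-066499 image build -t localhost/my-image:functional-066499 . --alsologtostderr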

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image load --daemon kicbase/echo-server:functional-066499 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 image load --daemon kicbase/echo-server:functional-066499 --alsologtostderr: (1.151255644s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "250.695979ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.313176ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "319.288079ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "73.395644ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
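The JSON emitted by profile list can be pretty-printed or filtered locally; a hedged example (jq is an assumption, not part of the test):

out/minikube-linux-amd64 profile list -o json | jq .
out/minikube-linux-amd64 profile list -o json --light | jq .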

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image load --daemon kicbase/echo-server:functional-066499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-066499
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image load --daemon kicbase/echo-server:functional-066499 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image save kicbase/echo-server:functional-066499 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image rm kicbase/echo-server:functional-066499 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 image rm kicbase/echo-server:functional-066499 --alsologtostderr: (1.572859129s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (5.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.446817132s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (5.70s)
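Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a save, remove, restore round trip for the cached image. The same cycle, collected from the commands shown in those three runs:

out/minikube-linux-amd64 -p functional-066499 image save kicbase/echo-server:functional-066499 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-066499 image rm kicbase/echo-server:functional-066499 --alsologtostderr
out/minikube-linux-amd64 -p functional-066499 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-066499 image ls    # the functional-066499 tag should be listed again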

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service list -o json
functional_test.go:1504: Took "261.175576ms" to run "out/minikube-linux-amd64 -p functional-066499 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.247:30789
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.247:30789
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.30s)
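The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint for the hello-node service. A hedged manual check against that URL (curl is an assumption and not part of the test; the echo-server image is expected to simply echo the request back):

URL=$(out/minikube-linux-amd64 -p functional-066499 service hello-node --url)
curl -s "$URL/"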

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (5.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-066499
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 image save --daemon kicbase/echo-server:functional-066499 --alsologtostderr
functional_test.go:439: (dbg) Done: out/minikube-linux-amd64 -p functional-066499 image save --daemon kicbase/echo-server:functional-066499 --alsologtostderr: (5.501588884s)
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (5.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2438283292/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765568797439621116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2438283292/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765568797439621116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2438283292/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765568797439621116" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2438283292/001/test-1765568797439621116
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.061915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:46:37.639265  139995 retry.go:31] will retry after 330.923465ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 19:46 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 19:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 19:46 test-1765568797439621116
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh cat /mount-9p/test-1765568797439621116
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-066499 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ea02459d-d244-4cfd-9f0b-4baecd1cf7dd] Pending
helpers_test.go:353: "busybox-mount" [ea02459d-d244-4cfd-9f0b-4baecd1cf7dd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ea02459d-d244-4cfd-9f0b-4baecd1cf7dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ea02459d-d244-4cfd-9f0b-4baecd1cf7dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.00438054s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-066499 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2438283292/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.80s)
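The any-port run mounts a host temp directory into the guest over 9p, verifies the mount with findmnt, and lets a busybox pod write created-by-pod into it. A minimal sketch of the same flow done by hand (the host directory below is arbitrary; /mount-9p matches the mount point used by the test):

out/minikube-linux-amd64 mount -p functional-066499 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-066499 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-066499 ssh "sudo umount -f /mount-9p"    # cleanup, as the test does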

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2817420323/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.735553ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:46:47.396802  139995 retry.go:31] will retry after 516.493001ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2817420323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "sudo umount -f /mount-9p": exit status 1 (154.614122ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-066499 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2817420323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T" /mount1: exit status 1 (172.500728ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 19:46:48.757223  139995 retry.go:31] will retry after 525.771631ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-066499 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-066499 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-066499 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3841783548/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-066499
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (204.91s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1212 19:47:13.399763  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.590334  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.596725  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.608221  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.629585  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.671176  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.752624  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:22.914312  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:23.236059  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:23.877953  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:25.159736  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:27.721851  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:32.844153  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:47:43.086410  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:48:03.567954  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:48:44.529362  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:50:06.451414  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m24.341308255s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (204.91s)

TestMultiControlPlane/serial/DeployApp (7.14s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 kubectl -- rollout status deployment/busybox: (4.752550468s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-5rq6f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-qfb76 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-x5j8k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-5rq6f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-qfb76 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-x5j8k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-5rq6f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-qfb76 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-x5j8k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.14s)

TestMultiControlPlane/serial/PingHostFromPods (1.29s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-5rq6f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-5rq6f -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-qfb76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-qfb76 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-x5j8k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 kubectl -- exec busybox-7b57f96db7-x5j8k -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (45.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node add --alsologtostderr -v 5
E1212 19:51:04.848247  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:04.854769  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:04.866124  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:04.887625  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:04.929130  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:05.010587  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:05.172155  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:05.493947  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:06.136000  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:07.418256  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 node add --alsologtostderr -v 5: (44.45991726s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.11s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-020641 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1212 19:51:09.980529  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (10.8s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp testdata/cp-test.txt ha-020641:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1475148878/001/cp-test_ha-020641.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641:/home/docker/cp-test.txt ha-020641-m02:/home/docker/cp-test_ha-020641_ha-020641-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test_ha-020641_ha-020641-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641:/home/docker/cp-test.txt ha-020641-m03:/home/docker/cp-test_ha-020641_ha-020641-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test_ha-020641_ha-020641-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641:/home/docker/cp-test.txt ha-020641-m04:/home/docker/cp-test_ha-020641_ha-020641-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test_ha-020641_ha-020641-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp testdata/cp-test.txt ha-020641-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1475148878/001/cp-test_ha-020641-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m02:/home/docker/cp-test.txt ha-020641:/home/docker/cp-test_ha-020641-m02_ha-020641.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test_ha-020641-m02_ha-020641.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m02:/home/docker/cp-test.txt ha-020641-m03:/home/docker/cp-test_ha-020641-m02_ha-020641-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test_ha-020641-m02_ha-020641-m03.txt"
E1212 19:51:15.102356  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m02:/home/docker/cp-test.txt ha-020641-m04:/home/docker/cp-test_ha-020641-m02_ha-020641-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test_ha-020641-m02_ha-020641-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp testdata/cp-test.txt ha-020641-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1475148878/001/cp-test_ha-020641-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m03:/home/docker/cp-test.txt ha-020641:/home/docker/cp-test_ha-020641-m03_ha-020641.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test_ha-020641-m03_ha-020641.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m03:/home/docker/cp-test.txt ha-020641-m02:/home/docker/cp-test_ha-020641-m03_ha-020641-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test_ha-020641-m03_ha-020641-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m03:/home/docker/cp-test.txt ha-020641-m04:/home/docker/cp-test_ha-020641-m03_ha-020641-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test_ha-020641-m03_ha-020641-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp testdata/cp-test.txt ha-020641-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1475148878/001/cp-test_ha-020641-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m04:/home/docker/cp-test.txt ha-020641:/home/docker/cp-test_ha-020641-m04_ha-020641.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641 "sudo cat /home/docker/cp-test_ha-020641-m04_ha-020641.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m04:/home/docker/cp-test.txt ha-020641-m02:/home/docker/cp-test_ha-020641-m04_ha-020641-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m02 "sudo cat /home/docker/cp-test_ha-020641-m04_ha-020641-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 cp ha-020641-m04:/home/docker/cp-test.txt ha-020641-m03:/home/docker/cp-test_ha-020641-m04_ha-020641-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 ssh -n ha-020641-m03 "sudo cat /home/docker/cp-test_ha-020641-m04_ha-020641-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.80s)

TestMultiControlPlane/serial/StopSecondaryNode (76.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node stop m02 --alsologtostderr -v 5
E1212 19:51:25.344579  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:51:45.826006  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:13.400584  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:22.585196  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:52:26.788010  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 node stop m02 --alsologtostderr -v 5: (1m16.315628579s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5: exit status 7 (474.956374ms)

                                                
                                                
-- stdout --
	ha-020641
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-020641-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-020641-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-020641-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 19:52:37.273533  152947 out.go:360] Setting OutFile to fd 1 ...
	I1212 19:52:37.273742  152947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:52:37.273749  152947 out.go:374] Setting ErrFile to fd 2...
	I1212 19:52:37.273753  152947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 19:52:37.273909  152947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 19:52:37.274064  152947 out.go:368] Setting JSON to false
	I1212 19:52:37.274089  152947 mustload.go:66] Loading cluster: ha-020641
	I1212 19:52:37.274206  152947 notify.go:221] Checking for updates...
	I1212 19:52:37.274450  152947 config.go:182] Loaded profile config "ha-020641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 19:52:37.274466  152947 status.go:174] checking status of ha-020641 ...
	I1212 19:52:37.276379  152947 status.go:371] ha-020641 host status = "Running" (err=<nil>)
	I1212 19:52:37.276397  152947 host.go:66] Checking if "ha-020641" exists ...
	I1212 19:52:37.279080  152947 main.go:143] libmachine: domain ha-020641 has defined MAC address 52:54:00:df:d7:0b in network mk-ha-020641
	I1212 19:52:37.279674  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:d7:0b", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:47:05 +0000 UTC Type:0 Mac:52:54:00:df:d7:0b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-020641 Clientid:01:52:54:00:df:d7:0b}
	I1212 19:52:37.279724  152947 main.go:143] libmachine: domain ha-020641 has defined IP address 192.168.39.59 and MAC address 52:54:00:df:d7:0b in network mk-ha-020641
	I1212 19:52:37.279890  152947 host.go:66] Checking if "ha-020641" exists ...
	I1212 19:52:37.280208  152947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:52:37.282519  152947 main.go:143] libmachine: domain ha-020641 has defined MAC address 52:54:00:df:d7:0b in network mk-ha-020641
	I1212 19:52:37.282894  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:df:d7:0b", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:47:05 +0000 UTC Type:0 Mac:52:54:00:df:d7:0b Iaid: IPaddr:192.168.39.59 Prefix:24 Hostname:ha-020641 Clientid:01:52:54:00:df:d7:0b}
	I1212 19:52:37.282937  152947 main.go:143] libmachine: domain ha-020641 has defined IP address 192.168.39.59 and MAC address 52:54:00:df:d7:0b in network mk-ha-020641
	I1212 19:52:37.283090  152947 sshutil.go:53] new ssh client: &{IP:192.168.39.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/ha-020641/id_rsa Username:docker}
	I1212 19:52:37.369593  152947 ssh_runner.go:195] Run: systemctl --version
	I1212 19:52:37.376479  152947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:52:37.393666  152947 kubeconfig.go:125] found "ha-020641" server: "https://192.168.39.254:8443"
	I1212 19:52:37.393729  152947 api_server.go:166] Checking apiserver status ...
	I1212 19:52:37.393782  152947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:52:37.413431  152947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	W1212 19:52:37.425086  152947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:52:37.425167  152947 ssh_runner.go:195] Run: ls
	I1212 19:52:37.430126  152947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1212 19:52:37.436276  152947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1212 19:52:37.436308  152947 status.go:463] ha-020641 apiserver status = Running (err=<nil>)
	I1212 19:52:37.436323  152947 status.go:176] ha-020641 status: &{Name:ha-020641 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:52:37.436347  152947 status.go:174] checking status of ha-020641-m02 ...
	I1212 19:52:37.438072  152947 status.go:371] ha-020641-m02 host status = "Stopped" (err=<nil>)
	I1212 19:52:37.438088  152947 status.go:384] host is not running, skipping remaining checks
	I1212 19:52:37.438093  152947 status.go:176] ha-020641-m02 status: &{Name:ha-020641-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:52:37.438122  152947 status.go:174] checking status of ha-020641-m03 ...
	I1212 19:52:37.439242  152947 status.go:371] ha-020641-m03 host status = "Running" (err=<nil>)
	I1212 19:52:37.439271  152947 host.go:66] Checking if "ha-020641-m03" exists ...
	I1212 19:52:37.441435  152947 main.go:143] libmachine: domain ha-020641-m03 has defined MAC address 52:54:00:5b:6f:f8 in network mk-ha-020641
	I1212 19:52:37.441751  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:6f:f8", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:49:10 +0000 UTC Type:0 Mac:52:54:00:5b:6f:f8 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-020641-m03 Clientid:01:52:54:00:5b:6f:f8}
	I1212 19:52:37.441770  152947 main.go:143] libmachine: domain ha-020641-m03 has defined IP address 192.168.39.144 and MAC address 52:54:00:5b:6f:f8 in network mk-ha-020641
	I1212 19:52:37.441890  152947 host.go:66] Checking if "ha-020641-m03" exists ...
	I1212 19:52:37.442059  152947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:52:37.443950  152947 main.go:143] libmachine: domain ha-020641-m03 has defined MAC address 52:54:00:5b:6f:f8 in network mk-ha-020641
	I1212 19:52:37.444295  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:6f:f8", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:49:10 +0000 UTC Type:0 Mac:52:54:00:5b:6f:f8 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:ha-020641-m03 Clientid:01:52:54:00:5b:6f:f8}
	I1212 19:52:37.444317  152947 main.go:143] libmachine: domain ha-020641-m03 has defined IP address 192.168.39.144 and MAC address 52:54:00:5b:6f:f8 in network mk-ha-020641
	I1212 19:52:37.444438  152947 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/ha-020641-m03/id_rsa Username:docker}
	I1212 19:52:37.528613  152947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:52:37.544652  152947 kubeconfig.go:125] found "ha-020641" server: "https://192.168.39.254:8443"
	I1212 19:52:37.544681  152947 api_server.go:166] Checking apiserver status ...
	I1212 19:52:37.544715  152947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 19:52:37.565398  152947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1843/cgroup
	W1212 19:52:37.576023  152947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1843/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 19:52:37.576094  152947 ssh_runner.go:195] Run: ls
	I1212 19:52:37.581084  152947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1212 19:52:37.585952  152947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1212 19:52:37.585978  152947 status.go:463] ha-020641-m03 apiserver status = Running (err=<nil>)
	I1212 19:52:37.585989  152947 status.go:176] ha-020641-m03 status: &{Name:ha-020641-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 19:52:37.586011  152947 status.go:174] checking status of ha-020641-m04 ...
	I1212 19:52:37.587911  152947 status.go:371] ha-020641-m04 host status = "Running" (err=<nil>)
	I1212 19:52:37.587945  152947 host.go:66] Checking if "ha-020641-m04" exists ...
	I1212 19:52:37.590616  152947 main.go:143] libmachine: domain ha-020641-m04 has defined MAC address 52:54:00:04:93:2e in network mk-ha-020641
	I1212 19:52:37.590992  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:93:2e", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:50:39 +0000 UTC Type:0 Mac:52:54:00:04:93:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-020641-m04 Clientid:01:52:54:00:04:93:2e}
	I1212 19:52:37.591012  152947 main.go:143] libmachine: domain ha-020641-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:04:93:2e in network mk-ha-020641
	I1212 19:52:37.591144  152947 host.go:66] Checking if "ha-020641-m04" exists ...
	I1212 19:52:37.591321  152947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 19:52:37.593601  152947 main.go:143] libmachine: domain ha-020641-m04 has defined MAC address 52:54:00:04:93:2e in network mk-ha-020641
	I1212 19:52:37.594100  152947 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:93:2e", ip: ""} in network mk-ha-020641: {Iface:virbr1 ExpiryTime:2025-12-12 20:50:39 +0000 UTC Type:0 Mac:52:54:00:04:93:2e Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-020641-m04 Clientid:01:52:54:00:04:93:2e}
	I1212 19:52:37.594172  152947 main.go:143] libmachine: domain ha-020641-m04 has defined IP address 192.168.39.203 and MAC address 52:54:00:04:93:2e in network mk-ha-020641
	I1212 19:52:37.594334  152947 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/ha-020641-m04/id_rsa Username:docker}
	I1212 19:52:37.674193  152947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 19:52:37.689283  152947 status.go:176] ha-020641-m04 status: &{Name:ha-020641-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (76.79s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.87s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node start m02 --alsologtostderr -v 5
E1212 19:52:50.294260  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 node start m02 --alsologtostderr -v 5: (34.040363607s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.76s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 stop --alsologtostderr -v 5
E1212 19:53:36.480192  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:53:48.712293  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:56:04.849176  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:56:32.554470  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:57:13.399895  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 19:57:22.584979  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 stop --alsologtostderr -v 5: (4m25.304762531s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 start --wait true --alsologtostderr -v 5: (1m53.297385711s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.76s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 node delete m03 --alsologtostderr -v 5: (17.469561709s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (259.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 stop --alsologtostderr -v 5
E1212 20:01:04.848776  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:02:13.399525  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:02:22.590785  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:03:45.656270  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 stop --alsologtostderr -v 5: (4m18.964760573s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5: exit status 7 (68.394302ms)

                                                
                                                
-- stdout --
	ha-020641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-020641-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-020641-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:04:10.184144  156354 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:04:10.184404  156354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:04:10.184412  156354 out.go:374] Setting ErrFile to fd 2...
	I1212 20:04:10.184425  156354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:04:10.184648  156354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:04:10.184818  156354 out.go:368] Setting JSON to false
	I1212 20:04:10.184845  156354 mustload.go:66] Loading cluster: ha-020641
	I1212 20:04:10.184904  156354 notify.go:221] Checking for updates...
	I1212 20:04:10.185219  156354 config.go:182] Loaded profile config "ha-020641": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:04:10.185234  156354 status.go:174] checking status of ha-020641 ...
	I1212 20:04:10.188081  156354 status.go:371] ha-020641 host status = "Stopped" (err=<nil>)
	I1212 20:04:10.188152  156354 status.go:384] host is not running, skipping remaining checks
	I1212 20:04:10.188158  156354 status.go:176] ha-020641 status: &{Name:ha-020641 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:04:10.188182  156354 status.go:174] checking status of ha-020641-m02 ...
	I1212 20:04:10.189372  156354 status.go:371] ha-020641-m02 host status = "Stopped" (err=<nil>)
	I1212 20:04:10.189384  156354 status.go:384] host is not running, skipping remaining checks
	I1212 20:04:10.189389  156354 status.go:176] ha-020641-m02 status: &{Name:ha-020641-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:04:10.189401  156354 status.go:174] checking status of ha-020641-m04 ...
	I1212 20:04:10.190630  156354 status.go:371] ha-020641-m04 host status = "Stopped" (err=<nil>)
	I1212 20:04:10.190644  156354 status.go:384] host is not running, skipping remaining checks
	I1212 20:04:10.190649  156354 status.go:176] ha-020641-m04 status: &{Name:ha-020641-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (259.03s)

TestMultiControlPlane/serial/RestartCluster (90.78s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m30.171637621s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (90.78s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

TestMultiControlPlane/serial/AddSecondaryNode (70.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 node add --control-plane --alsologtostderr -v 5
E1212 20:06:04.849227  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-020641 node add --control-plane --alsologtostderr -v 5: (1m9.548955711s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-020641 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

TestJSONOutput/start/Command (76.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-396616 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1212 20:07:13.399724  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:07:22.584959  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:07:27.918286  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-396616 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.473615705s)
--- PASS: TestJSONOutput/start/Command (76.47s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-396616 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-396616 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-396616 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-396616 --output=json --user=testUser: (6.796694348s)
--- PASS: TestJSONOutput/stop/Command (6.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-763871 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-763871 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.108792ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"46c5011d-5aa1-4c60-87e2-d52eae3d89d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-763871] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"efe03192-48b9-419f-8886-132084402379","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"f62cf028-a3ce-46d1-a83b-be1f1705efde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"866aeb50-877c-4dad-afad-474d46105cd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig"}}
	{"specversion":"1.0","id":"82a51f4b-1206-4e57-8667-428b2d816635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube"}}
	{"specversion":"1.0","id":"506161ab-66e8-470d-b749-cdd03ae6e41d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3f2c1a7a-38a6-421c-bea7-4548db42dc1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b0033633-fec4-4742-85ec-d987fb86796e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-763871" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-763871
--- PASS: TestErrorJSONOutput (0.23s)
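
The start output captured above is a stream of CloudEvents-style JSON lines (specversion, id, source, type, datacontenttype, data). As a minimal Go sketch, assuming only the fields visible in this log and nothing about minikube's internal types, a consumer could decode that stream like this:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the CloudEvents envelope visible in the log above.
	// The field set is taken from the captured output, not from minikube's source.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Read one event per line, e.g. piped from:
		//   out/minikube-linux-amd64 start -p <profile> --output=json
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
			default:
				fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
			}
		}
	}

Piped into the start command shown above, this prints one line per step, info, or error event.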

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (81.44s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-190914 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-190914 --driver=kvm2  --container-runtime=crio: (40.462549331s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-193054 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-193054 --driver=kvm2  --container-runtime=crio: (38.349146409s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-190914
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-193054
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-193054" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-193054
helpers_test.go:176: Cleaning up "first-190914" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-190914
--- PASS: TestMinikubeProfile (81.44s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-208941 --memory=3072 --mount-string /tmp/TestMountStartserial1764173107/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-208941 --memory=3072 --mount-string /tmp/TestMountStartserial1764173107/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.874383915s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.87s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-208941 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-208941 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
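
The verification step shells into the guest and runs "findmnt --json /minikube-host". As a sketch of the same check, assuming the usual util-linux findmnt JSON layout (a top-level "filesystems" array) and reusing the profile name and mount point from this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// findmntOutput models the common `findmnt --json` layout; the exact
	// field set is an assumption about this util-linux version.
	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			Fstype  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		// Same command the test issues: ask the guest whether the host mount is present.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-208941",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			fmt.Println("findmnt failed:", err)
			return
		}
		var fm findmntOutput
		if err := json.Unmarshal(out, &fm); err != nil {
			fmt.Println("unexpected findmnt output:", err)
			return
		}
		for _, fs := range fm.Filesystems {
			fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.Fstype, fs.Options)
		}
	}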

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-231054 --memory=3072 --mount-string /tmp/TestMountStartserial1764173107/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1212 20:10:16.482674  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-231054 --memory=3072 --mount-string /tmp/TestMountStartserial1764173107/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.11968753s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.12s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-208941 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-231054
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-231054: (1.256971453s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.8s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-231054
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-231054: (17.796913106s)
--- PASS: TestMountStart/serial/RestartStopped (18.80s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-231054 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (96.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943484 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 20:11:04.848605  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:12:13.399820  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943484 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.8579277s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.21s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- rollout status deployment/busybox
E1212 20:12:22.585003  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-943484 -- rollout status deployment/busybox: (4.119491357s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-tgzzg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-v5hdj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-tgzzg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-v5hdj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-tgzzg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-v5hdj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-tgzzg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-tgzzg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-v5hdj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-943484 -- exec busybox-7b57f96db7-v5hdj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                    
TestMultiNode/serial/AddNode (44.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-943484 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-943484 -v=5 --alsologtostderr: (43.918000971s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.35s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-943484 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp testdata/cp-test.txt multinode-943484:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337036029/001/cp-test_multinode-943484.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484:/home/docker/cp-test.txt multinode-943484-m02:/home/docker/cp-test_multinode-943484_multinode-943484-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test_multinode-943484_multinode-943484-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484:/home/docker/cp-test.txt multinode-943484-m03:/home/docker/cp-test_multinode-943484_multinode-943484-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test_multinode-943484_multinode-943484-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp testdata/cp-test.txt multinode-943484-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337036029/001/cp-test_multinode-943484-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m02:/home/docker/cp-test.txt multinode-943484:/home/docker/cp-test_multinode-943484-m02_multinode-943484.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test_multinode-943484-m02_multinode-943484.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m02:/home/docker/cp-test.txt multinode-943484-m03:/home/docker/cp-test_multinode-943484-m02_multinode-943484-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test_multinode-943484-m02_multinode-943484-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp testdata/cp-test.txt multinode-943484-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile337036029/001/cp-test_multinode-943484-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m03:/home/docker/cp-test.txt multinode-943484:/home/docker/cp-test_multinode-943484-m03_multinode-943484.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484 "sudo cat /home/docker/cp-test_multinode-943484-m03_multinode-943484.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 cp multinode-943484-m03:/home/docker/cp-test.txt multinode-943484-m02:/home/docker/cp-test_multinode-943484-m03_multinode-943484-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 ssh -n multinode-943484-m02 "sudo cat /home/docker/cp-test_multinode-943484-m03_multinode-943484-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.98s)
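
Every step above follows the same copy-then-verify pattern: cp a file onto a node, then ssh -n <node> "sudo cat ..." it back and compare the bytes. A small Go sketch of that pattern, reusing the profile, node names, and paths from this run:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	const minikube = "out/minikube-linux-amd64"

	// copyAndVerify mirrors the cp/ssh pairs above: copy src into the named
	// node, then cat it back over ssh and compare it with the local file.
	func copyAndVerify(profile, node, src, dst string) error {
		if err := exec.Command(minikube, "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
			return fmt.Errorf("cp to %s failed: %w", node, err)
		}
		got, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
		if err != nil {
			return fmt.Errorf("ssh cat on %s failed: %w", node, err)
		}
		want, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if !bytes.Equal(got, want) {
			return fmt.Errorf("content mismatch on %s", node)
		}
		return nil
	}

	func main() {
		for _, node := range []string{"multinode-943484", "multinode-943484-m02", "multinode-943484-m03"} {
			if err := copyAndVerify("multinode-943484", node, "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
				fmt.Println(err)
			}
		}
	}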

                                                
                                    
TestMultiNode/serial/StopNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-943484 node stop m03: (1.516672402s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943484 status: exit status 7 (318.471304ms)

                                                
                                                
-- stdout --
	multinode-943484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr: exit status 7 (323.658682ms)

                                                
                                                
-- stdout --
	multinode-943484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-943484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-943484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:21.867172  161840 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:21.867290  161840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:21.867297  161840 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:21.867302  161840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:21.867527  161840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:13:21.867707  161840 out.go:368] Setting JSON to false
	I1212 20:13:21.867733  161840 mustload.go:66] Loading cluster: multinode-943484
	I1212 20:13:21.867794  161840 notify.go:221] Checking for updates...
	I1212 20:13:21.868265  161840 config.go:182] Loaded profile config "multinode-943484": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:21.868287  161840 status.go:174] checking status of multinode-943484 ...
	I1212 20:13:21.870719  161840 status.go:371] multinode-943484 host status = "Running" (err=<nil>)
	I1212 20:13:21.870738  161840 host.go:66] Checking if "multinode-943484" exists ...
	I1212 20:13:21.873287  161840 main.go:143] libmachine: domain multinode-943484 has defined MAC address 52:54:00:cc:56:30 in network mk-multinode-943484
	I1212 20:13:21.873699  161840 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:56:30", ip: ""} in network mk-multinode-943484: {Iface:virbr1 ExpiryTime:2025-12-12 21:11:00 +0000 UTC Type:0 Mac:52:54:00:cc:56:30 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-943484 Clientid:01:52:54:00:cc:56:30}
	I1212 20:13:21.873723  161840 main.go:143] libmachine: domain multinode-943484 has defined IP address 192.168.39.174 and MAC address 52:54:00:cc:56:30 in network mk-multinode-943484
	I1212 20:13:21.873865  161840 host.go:66] Checking if "multinode-943484" exists ...
	I1212 20:13:21.874062  161840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:13:21.876265  161840 main.go:143] libmachine: domain multinode-943484 has defined MAC address 52:54:00:cc:56:30 in network mk-multinode-943484
	I1212 20:13:21.876621  161840 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:56:30", ip: ""} in network mk-multinode-943484: {Iface:virbr1 ExpiryTime:2025-12-12 21:11:00 +0000 UTC Type:0 Mac:52:54:00:cc:56:30 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-943484 Clientid:01:52:54:00:cc:56:30}
	I1212 20:13:21.876652  161840 main.go:143] libmachine: domain multinode-943484 has defined IP address 192.168.39.174 and MAC address 52:54:00:cc:56:30 in network mk-multinode-943484
	I1212 20:13:21.876783  161840 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/multinode-943484/id_rsa Username:docker}
	I1212 20:13:21.958979  161840 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:21.965916  161840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:13:21.982138  161840 kubeconfig.go:125] found "multinode-943484" server: "https://192.168.39.174:8443"
	I1212 20:13:21.982189  161840 api_server.go:166] Checking apiserver status ...
	I1212 20:13:21.982243  161840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:13:22.000480  161840 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup
	W1212 20:13:22.011710  161840 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1356/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:13:22.011772  161840 ssh_runner.go:195] Run: ls
	I1212 20:13:22.016810  161840 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1212 20:13:22.021334  161840 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I1212 20:13:22.021364  161840 status.go:463] multinode-943484 apiserver status = Running (err=<nil>)
	I1212 20:13:22.021376  161840 status.go:176] multinode-943484 status: &{Name:multinode-943484 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:13:22.021417  161840 status.go:174] checking status of multinode-943484-m02 ...
	I1212 20:13:22.023101  161840 status.go:371] multinode-943484-m02 host status = "Running" (err=<nil>)
	I1212 20:13:22.023132  161840 host.go:66] Checking if "multinode-943484-m02" exists ...
	I1212 20:13:22.025786  161840 main.go:143] libmachine: domain multinode-943484-m02 has defined MAC address 52:54:00:e3:85:5a in network mk-multinode-943484
	I1212 20:13:22.026248  161840 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e3:85:5a", ip: ""} in network mk-multinode-943484: {Iface:virbr1 ExpiryTime:2025-12-12 21:11:53 +0000 UTC Type:0 Mac:52:54:00:e3:85:5a Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-943484-m02 Clientid:01:52:54:00:e3:85:5a}
	I1212 20:13:22.026272  161840 main.go:143] libmachine: domain multinode-943484-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:e3:85:5a in network mk-multinode-943484
	I1212 20:13:22.026407  161840 host.go:66] Checking if "multinode-943484-m02" exists ...
	I1212 20:13:22.026647  161840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:13:22.028950  161840 main.go:143] libmachine: domain multinode-943484-m02 has defined MAC address 52:54:00:e3:85:5a in network mk-multinode-943484
	I1212 20:13:22.029343  161840 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e3:85:5a", ip: ""} in network mk-multinode-943484: {Iface:virbr1 ExpiryTime:2025-12-12 21:11:53 +0000 UTC Type:0 Mac:52:54:00:e3:85:5a Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-943484-m02 Clientid:01:52:54:00:e3:85:5a}
	I1212 20:13:22.029375  161840 main.go:143] libmachine: domain multinode-943484-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:e3:85:5a in network mk-multinode-943484
	I1212 20:13:22.029510  161840 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22112-135957/.minikube/machines/multinode-943484-m02/id_rsa Username:docker}
	I1212 20:13:22.109464  161840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:13:22.125673  161840 status.go:176] multinode-943484-m02 status: &{Name:multinode-943484-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:13:22.125713  161840 status.go:174] checking status of multinode-943484-m03 ...
	I1212 20:13:22.127633  161840 status.go:371] multinode-943484-m03 host status = "Stopped" (err=<nil>)
	I1212 20:13:22.127659  161840 status.go:384] host is not running, skipping remaining checks
	I1212 20:13:22.127667  161840 status.go:176] multinode-943484-m03 status: &{Name:multinode-943484-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
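
The plain-text status above is rendered from the Status struct logged in stderr (Name, Host, Kubelet, APIServer, Kubeconfig, Worker), and the status command exits 7 once any node is stopped. A hedged sketch that calls "status --output json" (the form used in the CopyFile step above) and reports per-node state; whether the JSON keys match those struct field names, and whether multi-node output is an array rather than a single object, are assumptions here:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeStatus mirrors the fields printed in the stderr trace above; the
	// JSON key names are assumed, not confirmed against minikube's source.
	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// `status` exits non-zero (7 here) when a node is stopped, so keep
		// whatever stdout was produced even if err != nil.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-943484",
			"status", "--output", "json").Output()
		if len(out) == 0 {
			fmt.Println("no status output:", err)
			return
		}
		var nodes []nodeStatus
		if arrErr := json.Unmarshal(out, &nodes); arrErr != nil {
			// A single-node cluster may emit one object instead of an array.
			var one nodeStatus
			if objErr := json.Unmarshal(out, &one); objErr != nil {
				fmt.Println("unexpected status output:", arrErr)
				return
			}
			nodes = []nodeStatus{one}
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
		}
	}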

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-943484 node start m03 -v=5 --alsologtostderr: (35.924236481s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.40s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (280.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943484
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-943484
E1212 20:16:04.848867  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-943484: (2m41.328124316s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943484 --wait=true -v=5 --alsologtostderr
E1212 20:17:13.400427  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:22.586549  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943484 --wait=true -v=5 --alsologtostderr: (1m59.449773941s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943484
--- PASS: TestMultiNode/serial/RestartKeepsNodes (280.90s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-943484 node delete m03: (2.088790146s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (168.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 stop
E1212 20:20:25.658430  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:21:04.848234  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-943484 stop: (2m48.626175862s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943484 status: exit status 7 (66.798212ms)

                                                
                                                
-- stdout --
	multinode-943484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr: exit status 7 (65.487804ms)

                                                
                                                
-- stdout --
	multinode-943484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-943484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:21:30.721601  164558 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:21:30.721859  164558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:21:30.721867  164558 out.go:374] Setting ErrFile to fd 2...
	I1212 20:21:30.721872  164558 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:21:30.722069  164558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:21:30.722260  164558 out.go:368] Setting JSON to false
	I1212 20:21:30.722288  164558 mustload.go:66] Loading cluster: multinode-943484
	I1212 20:21:30.722371  164558 notify.go:221] Checking for updates...
	I1212 20:21:30.722696  164558 config.go:182] Loaded profile config "multinode-943484": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:21:30.722714  164558 status.go:174] checking status of multinode-943484 ...
	I1212 20:21:30.724946  164558 status.go:371] multinode-943484 host status = "Stopped" (err=<nil>)
	I1212 20:21:30.724961  164558 status.go:384] host is not running, skipping remaining checks
	I1212 20:21:30.724966  164558 status.go:176] multinode-943484 status: &{Name:multinode-943484 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:21:30.724982  164558 status.go:174] checking status of multinode-943484-m02 ...
	I1212 20:21:30.726058  164558 status.go:371] multinode-943484-m02 host status = "Stopped" (err=<nil>)
	I1212 20:21:30.726070  164558 status.go:384] host is not running, skipping remaining checks
	I1212 20:21:30.726075  164558 status.go:176] multinode-943484-m02 status: &{Name:multinode-943484-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (168.76s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943484 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 20:22:13.399557  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:22:22.587329  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943484 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m21.833425205s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-943484 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-943484
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943484-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-943484-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.187908ms)

                                                
                                                
-- stdout --
	* [multinode-943484-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-943484-m02' is duplicated with machine name 'multinode-943484-m02' in profile 'multinode-943484'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-943484-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-943484-m03 --driver=kvm2  --container-runtime=crio: (37.64784289s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-943484
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-943484: exit status 80 (201.718392ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-943484 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-943484-m03 already exists in multinode-943484-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-943484-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.82s)
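
This subtest leans on specific exit codes: 14 (MK_USAGE) for the duplicated profile name and 80 (GUEST_NODE_ADD) for the rejected node add. A small sketch of the piece such an assertion needs, extracting the exit status via os/exec; the command shown is the duplicate-name case from the run above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs a command and returns its exit status, so a caller can
	// assert on minikube's reason codes (14 and 80 in the run above).
	func exitCode(name string, args ...string) (int, error) {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return 0, nil
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil
		}
		return -1, err // the command did not start at all
	}

	func main() {
		code, err := exitCode("out/minikube-linux-amd64", "start", "-p", "multinode-943484-m02",
			"--driver=kvm2", "--container-runtime=crio")
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Println("exit code:", code) // 14 expected while profile multinode-943484 still owns that machine name
	}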

                                                
                                    
TestScheduledStopUnix (106.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-155981 --memory=3072 --driver=kvm2  --container-runtime=crio
E1212 20:26:04.848314  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-155981 --memory=3072 --driver=kvm2  --container-runtime=crio: (34.659532829s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155981 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:26:33.044005  166829 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:26:33.044122  166829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:33.044133  166829 out.go:374] Setting ErrFile to fd 2...
	I1212 20:26:33.044140  166829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:33.044365  166829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:26:33.044603  166829 out.go:368] Setting JSON to false
	I1212 20:26:33.044682  166829 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:33.044980  166829 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:26:33.045047  166829 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/config.json ...
	I1212 20:26:33.045235  166829 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:33.045331  166829 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-155981 -n scheduled-stop-155981
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155981 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:26:33.330206  166874 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:26:33.330492  166874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:33.330502  166874 out.go:374] Setting ErrFile to fd 2...
	I1212 20:26:33.330506  166874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:33.330748  166874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:26:33.330985  166874 out.go:368] Setting JSON to false
	I1212 20:26:33.331202  166874 daemonize_unix.go:73] killing process 166863 as it is an old scheduled stop
	I1212 20:26:33.331307  166874 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:33.331737  166874 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:26:33.331834  166874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/config.json ...
	I1212 20:26:33.332048  166874 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:33.332194  166874 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 20:26:33.339279  139995 retry.go:31] will retry after 64.334µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.340439  139995 retry.go:31] will retry after 101.984µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.341595  139995 retry.go:31] will retry after 238.69µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.342763  139995 retry.go:31] will retry after 195.318µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.343897  139995 retry.go:31] will retry after 686.662µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.345007  139995 retry.go:31] will retry after 539.019µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.346129  139995 retry.go:31] will retry after 791.187µs: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.347274  139995 retry.go:31] will retry after 2.172013ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.350479  139995 retry.go:31] will retry after 2.929803ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.353650  139995 retry.go:31] will retry after 4.761558ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.358896  139995 retry.go:31] will retry after 7.673599ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.367120  139995 retry.go:31] will retry after 4.431266ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.372327  139995 retry.go:31] will retry after 19.203796ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.392521  139995 retry.go:31] will retry after 14.373601ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
I1212 20:26:33.407888  139995 retry.go:31] will retry after 41.882929ms: open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155981 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1212 20:26:56.484876  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155981 -n scheduled-stop-155981
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-155981
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-155981 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 20:26:59.001267  167022 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:26:59.001563  167022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:59.001577  167022 out.go:374] Setting ErrFile to fd 2...
	I1212 20:26:59.001584  167022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:26:59.001818  167022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:26:59.002099  167022 out.go:368] Setting JSON to false
	I1212 20:26:59.002188  167022 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:59.002489  167022 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:26:59.002543  167022 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/scheduled-stop-155981/config.json ...
	I1212 20:26:59.002728  167022 mustload.go:66] Loading cluster: scheduled-stop-155981
	I1212 20:26:59.002816  167022 config.go:182] Loaded profile config "scheduled-stop-155981": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
E1212 20:27:13.399521  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1212 20:27:22.592407  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-155981
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-155981: exit status 7 (62.899072ms)

                                                
                                                
-- stdout --
	scheduled-stop-155981
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155981 -n scheduled-stop-155981
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-155981 -n scheduled-stop-155981: exit status 7 (60.816462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-155981" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-155981
--- PASS: TestScheduledStopUnix (106.23s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (118.15s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.395425477 start -p running-upgrade-553730 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.395425477 start -p running-upgrade-553730 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (43.300890588s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-553730 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-553730 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.22260203s)
helpers_test.go:176: Cleaning up "running-upgrade-553730" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-553730
--- PASS: TestRunningBinaryUpgrade (118.15s)

                                                
                                    
x
+
TestKubernetesUpgrade (147.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.483754712s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-404101
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-404101: (1.853512875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-404101 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-404101 status --format={{.Host}}: exit status 7 (73.729178ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.074802662s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-404101 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.625332ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-404101] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-404101
	    minikube start -p kubernetes-upgrade-404101 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4041012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-404101 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-404101 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.784404633s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-404101" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-404101
--- PASS: TestKubernetesUpgrade (147.32s)
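The downgrade attempt above is rejected by design, and the test only re-validates the cluster indirectly via kubectl version. As a minimal sketch (not part of the captured output), the upgraded control plane could also be confirmed from the node list, assuming the kubernetes-upgrade-404101 profile were still running at this point:

	$ out/minikube-linux-amd64 -p kubernetes-upgrade-404101 kubectl -- get nodes
	# the VERSION column is expected to show v1.35.0-beta.0 after the restart above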

                                                
                                    
x
+
TestISOImage/Setup (35.98s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-095861 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-095861 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.977006434s)
--- PASS: TestISOImage/Setup (35.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.653986ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-129221] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (91.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-129221 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-129221 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m30.84226214s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-129221 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (91.08s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.71s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (157.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2456813222 start -p stopped-upgrade-529335 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2456813222 start -p stopped-upgrade-529335 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m35.851819872s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2456813222 -p stopped-upgrade-529335 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2456813222 -p stopped-upgrade-529335 stop: (1.535554555s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-529335 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-529335 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.684609399s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (157.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (36.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (35.434636541s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-129221 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-129221 status -o json: exit status 2 (194.65551ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-129221","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-129221
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (36.52s)
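The status JSON above (Host Running, Kubelet Stopped) is what this step asserts on. A minimal sketch of checking the same field by hand, assuming jq is available on the host (not part of the test itself):

	$ out/minikube-linux-amd64 -p NoKubernetes-129221 status -o json | jq -r .Kubelet
	# prints "Stopped"; note minikube status exits non-zero (exit status 2 above) whenever a component is stopped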

                                                
                                    
x
+
TestNoKubernetes/serial/Start (42.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-129221 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.057318389s)
--- PASS: TestNoKubernetes/serial/Start (42.06s)

                                                
                                    
x
+
TestPause/serial/Start (113.1s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-455927 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-455927 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m53.099592529s)
--- PASS: TestPause/serial/Start (113.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22112-135957/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-129221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-129221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.843579ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.571825859s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.581356242s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-129221
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-129221: (1.232209206s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (45.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-129221 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-129221 --driver=kvm2  --container-runtime=crio: (45.762262123s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (45.76s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-529335
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-529335: (1.025057061s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-129221 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-129221 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.022905ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-873824 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-873824 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (125.010965ms)

                                                
                                                
-- stdout --
	* [false-873824] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:32:08.622341  171888 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:32:08.622470  171888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:08.622477  171888 out.go:374] Setting ErrFile to fd 2...
	I1212 20:32:08.622483  171888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:32:08.622689  171888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-135957/.minikube/bin
	I1212 20:32:08.623215  171888 out.go:368] Setting JSON to false
	I1212 20:32:08.624125  171888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8069,"bootTime":1765563460,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 20:32:08.624181  171888 start.go:143] virtualization: kvm guest
	I1212 20:32:08.625991  171888 out.go:179] * [false-873824] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1212 20:32:08.627183  171888 notify.go:221] Checking for updates...
	I1212 20:32:08.627193  171888 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:32:08.628446  171888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:32:08.629601  171888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-135957/kubeconfig
	I1212 20:32:08.630790  171888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-135957/.minikube
	I1212 20:32:08.631970  171888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 20:32:08.633153  171888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:32:08.634855  171888 config.go:182] Loaded profile config "cert-expiration-391329": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:08.634998  171888 config.go:182] Loaded profile config "force-systemd-env-370330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:08.635127  171888 config.go:182] Loaded profile config "guest-095861": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1212 20:32:08.635346  171888 config.go:182] Loaded profile config "pause-455927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:32:08.635465  171888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:32:08.668059  171888 out.go:179] * Using the kvm2 driver based on user configuration
	I1212 20:32:08.669136  171888 start.go:309] selected driver: kvm2
	I1212 20:32:08.669150  171888 start.go:927] validating driver "kvm2" against <nil>
	I1212 20:32:08.669161  171888 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:32:08.670878  171888 out.go:203] 
	W1212 20:32:08.672081  171888 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 20:32:08.673800  171888 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-873824 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.217:8443
  name: pause-455927
contexts:
- context:
    cluster: pause-455927
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-455927
  name: pause-455927
current-context: ""
kind: Config
users:
- name: pause-455927
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.crt
    client-key: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-873824

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873824"

                                                
                                                
----------------------- debugLogs end: false-873824 [took: 3.873888369s] --------------------------------
helpers_test.go:176: Cleaning up "false-873824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-873824
--- PASS: TestNetworkPlugins/group/false (4.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (115.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m55.983601494s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.98s)
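To reproduce this first start outside the CI harness, a minimal sketch (assuming a working KVM2/libvirt host and the CI-built binary at out/minikube-linux-amd64; an installed minikube binary and any profile name behave the same way, and some harness-only flags are omitted):

	# Bring up a cluster on an older Kubernetes release with the CRI-O runtime,
	# mirroring the flags logged above.
	out/minikube-linux-amd64 start -p old-k8s-version-202994 \
	  --memory=3072 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.28.0 \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --wait=true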

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (105.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m45.283447819s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (105.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (105.65s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-222571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-222571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m45.651660025s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (105.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-202994 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [d3768a40-408d-49be-8040-d1b78ef17066] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [d3768a40-408d-49be-8040-d1b78ef17066] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003268057s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-202994 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)
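The deploy step above applies the repo's busybox manifest, waits for the pod the test selects on, and runs the ulimit probe. A sketch assuming the minikube repo's testdata directory is the working directory (the wait command here stands in for the harness's own polling loop):

	kubectl --context old-k8s-version-202994 create -f testdata/busybox.yaml
	# The harness allows up to 8m for pods labelled integration-test=busybox to become ready.
	kubectl --context old-k8s-version-202994 wait --for=condition=Ready \
	  pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-202994 exec busybox -- /bin/sh -c "ulimit -n"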

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-202994 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-202994 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.03s)
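The addon-while-active check is the minikube addons CLI with image and registry overrides, followed by an inspection of the resulting Deployment; a sketch using the same overrides as the log:

	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-202994 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain
	kubectl --context old-k8s-version-202994 describe deploy/metrics-server -n kube-system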

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (73.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-202994 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-202994 --alsologtostderr -v=3: (1m13.092804644s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (73.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095764 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [9b4218f8-d952-4544-9ac0-1c1dd99aec01] Pending
helpers_test.go:353: "busybox" [9b4218f8-d952-4544-9ac0-1c1dd99aec01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [9b4218f8-d952-4544-9ac0-1c1dd99aec01] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003904344s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095764 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-222571 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b648fb66-daf0-497a-bae4-f3a7c94ceb59] Pending
helpers_test.go:353: "busybox" [b648fb66-daf0-497a-bae4-f3a7c94ceb59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b648fb66-daf0-497a-bae4-f3a7c94ceb59] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004570758s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-222571 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095764 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-095764 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (82.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-095764 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-095764 --alsologtostderr -v=3: (1m22.994450426s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-222571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-222571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (90.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-222571 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-222571 --alsologtostderr -v=3: (1m30.238158785s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (90.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-202994 -n old-k8s-version-202994
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-202994 -n old-k8s-version-202994: exit status 7 (90.293308ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-202994 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
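For context, the non-zero status exit here accompanies a plain "Stopped" host state, which the test treats as acceptable before enabling the dashboard addon on the stopped profile. A sketch of the same check-then-enable sequence:

	# Prints "Stopped" and exits non-zero while the VM is down; addons can still be toggled.
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-202994 || true
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-202994 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4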

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (46.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1212 20:36:04.848905  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-202994 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.726718183s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-202994 -n old-k8s-version-202994
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095764 -n no-preload-095764
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095764 -n no-preload-095764: exit status 7 (66.327617ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-095764 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (55.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095764 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (55.088487358s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095764 -n no-preload-095764
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pbzh9" [dfc30c68-61df-41c1-971f-119feec13c12] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pbzh9" [dfc30c68-61df-41c1-971f-119feec13c12] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.004755397s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-222571 -n embed-certs-222571
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-222571 -n embed-certs-222571: exit status 7 (69.9526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-222571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-222571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-222571 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (47.412225376s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-222571 -n embed-certs-222571
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-pbzh9" [dfc30c68-61df-41c1-971f-119feec13c12] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006713714s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-202994 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-202994 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-202994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-202994 -n old-k8s-version-202994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-202994 -n old-k8s-version-202994: exit status 2 (246.150834ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-202994 -n old-k8s-version-202994
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-202994 -n old-k8s-version-202994: exit status 2 (246.137934ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-202994 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-202994 -n old-k8s-version-202994
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-202994 -n old-k8s-version-202994
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)
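The pause check pairs each state transition with per-component status probes; while paused, the APIServer and Kubelet fields report Paused/Stopped and the status command exits 2, which the test tolerates. A sketch of the same sequence:

	out/minikube-linux-amd64 pause -p old-k8s-version-202994
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-202994   # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-202994     # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p old-k8s-version-202994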

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (48.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-415383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 20:37:05.659940  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:37:13.400066  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-415383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (48.696863268s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lwrc6" [c90eb8f5-ca3a-492a-b4c9-49986f691d37] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lwrc6" [c90eb8f5-ca3a-492a-b4c9-49986f691d37] Running
E1212 20:37:22.584984  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-202590/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.00519522s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qwfpx" [8cbdc9d7-898f-46fc-b96f-f575dbfc6daa] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qwfpx" [8cbdc9d7-898f-46fc-b96f-f575dbfc6daa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00484008s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-lwrc6" [c90eb8f5-ca3a-492a-b4c9-49986f691d37] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006133145s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-095764 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095764 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-095764 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095764 -n no-preload-095764
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095764 -n no-preload-095764: exit status 2 (262.23748ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095764 -n no-preload-095764
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095764 -n no-preload-095764: exit status 2 (267.94482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-095764 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095764 -n no-preload-095764
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-095764 -n no-preload-095764
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qwfpx" [8cbdc9d7-898f-46fc-b96f-f575dbfc6daa] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003968433s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-222571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.663099802s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-222571 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-222571 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-222571 -n embed-certs-222571
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-222571 -n embed-certs-222571: exit status 2 (229.943505ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-222571 -n embed-certs-222571
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-222571 -n embed-certs-222571: exit status 2 (237.623305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-222571 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-222571 -n embed-certs-222571
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-222571 -n embed-certs-222571
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (93.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m33.261044192s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (88.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m28.544979376s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.55s)
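The network-plugin starts above share the same command shape and differ mainly in the --cni flag; a sketch for the kindnet case (KVM2 host and CI-built binary assumed, as before):

	out/minikube-linux-amd64 start -p kindnet-873824 \
	  --memory=3072 --driver=kvm2 --container-runtime=crio \
	  --cni=kindnet --wait=true --wait-timeout=15m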

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-415383 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (6.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-415383 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-415383 --alsologtostderr -v=3: (6.945095127s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-415383 -n newest-cni-415383
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-415383 -n newest-cni-415383: exit status 7 (60.029003ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-415383 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (79.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-415383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-415383 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m18.91498351s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-415383 -n newest-cni-415383
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (79.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-652961 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [f004c98c-dc0d-4598-a65e-29a5522e1d0b] Pending
helpers_test.go:353: "busybox" [f004c98c-dc0d-4598-a65e-29a5522e1d0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [f004c98c-dc0d-4598-a65e-29a5522e1d0b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004611741s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-652961 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-652961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-652961 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-pcjqb" [4324ed7f-dd48-4275-a61f-8ba9a17c8490] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004682672s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-415383 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-415383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-415383 -n newest-cni-415383
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-415383 -n newest-cni-415383: exit status 2 (228.002858ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-415383 -n newest-cni-415383
I1212 20:39:09.927606  139995 config.go:182] Loaded profile config "auto-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-415383 -n newest-cni-415383: exit status 2 (236.443905ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-415383 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-415383 -n newest-cni-415383
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-415383 -n newest-cni-415383
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-652961 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-652961 --alsologtostderr -v=3: (1m23.488765099s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-873824 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tvb27" [911ace28-7d75-47e5-95e6-f7b404b360bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tvb27" [911ace28-7d75-47e5-95e6-f7b404b360bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004054052s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (87.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m27.89692746s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-873824 "pgrep -a kubelet"
I1212 20:39:14.331554  139995 config.go:182] Loaded profile config "kindnet-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-l9jqt" [123ef6ac-05fa-49b2-b7a9-b4485f40e0e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-l9jqt" [123ef6ac-05fa-49b2-b7a9-b4485f40e0e7] Running
E1212 20:39:20.116039  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.122448  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.133816  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.155405  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.197093  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.278656  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.440237  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:20.761632  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004031197s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1212 20:39:21.403803  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m11.741587934s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1212 20:39:40.611279  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.829059  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.835527  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.846923  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.868334  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.909819  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:44.991342  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:45.153310  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:45.475244  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:46.117648  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:47.399260  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:49.961154  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:55.083419  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:40:01.093320  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:40:05.325289  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:40:25.806717  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.610023938s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961: exit status 7 (74.332177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-652961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-652961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-652961 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (48.948300117s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-j8ncl" [35a35814-2dff-4d19-ba0c-4fb3feeb0946] Running
E1212 20:40:42.054928  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/old-k8s-version-202994/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004004906s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-873824 "pgrep -a kubelet"
I1212 20:40:46.686175  139995 config.go:182] Loaded profile config "calico-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wvjm7" [006ec388-379d-4dd6-b797-0512413ae622] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wvjm7" [006ec388-379d-4dd6-b797-0512413ae622] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005333886s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-873824 "pgrep -a kubelet"
I1212 20:40:47.652043  139995 config.go:182] Loaded profile config "custom-flannel-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wncjt" [81679b62-82fa-4a0e-a1a0-a5a57febfabd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 20:40:47.922520  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/functional-066499/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-wncjt" [81679b62-82fa-4a0e-a1a0-a5a57febfabd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00455286s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.612664458s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (105.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-873824 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m45.137339571s)
--- PASS: TestNetworkPlugins/group/bridge/Start (105.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-873824 "pgrep -a kubelet"
I1212 20:41:19.108409  139995 config.go:182] Loaded profile config "enable-default-cni-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nnlw5" [53d54e33-5fe4-49b8-927f-1b6ce1a11687] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nnlw5" [53d54e33-5fe4-49b8-927f-1b6ce1a11687] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003778696s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qgs8n" [178d3d05-43f3-4399-b27e-439a9d31122f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qgs8n" [178d3d05-43f3-4399-b27e-439a9d31122f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.003491607s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-qgs8n" [178d3d05-43f3-4399-b27e-439a9d31122f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005214791s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-652961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-652961 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-652961 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961: exit status 2 (274.988717ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961: exit status 2 (301.443829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-652961 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-652961 --alsologtostderr -v=1: (1.006684242s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-652961 -n default-k8s-diff-port-652961
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
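
Note: each PersistentMounts subtest above runs "df -t ext4 PATH | grep PATH" over SSH to confirm the path is backed by a persistent ext4 filesystem in the guest. A rough local analogue in Go is sketched below; the paths and the ext4 assumption come from the commands above, but the helper is hypothetical and not part of the test suite.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// Hypothetical local analogue of the persistent-mount check: run
	// `df -t ext4 PATH` and require PATH to appear in the output.
	func mountedOnExt4(path string) bool {
		out, err := exec.Command("df", "-t", "ext4", path).CombinedOutput()
		if err != nil {
			return false
		}
		return bytes.Contains(out, []byte(path))
	}

	func main() {
		for _, p := range []string{"/data", "/var/lib/docker", "/var/lib/minikube"} {
			fmt.Printf("%s on ext4: %v\n", p, mountedOnExt4(p))
		}
	}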

                                                
                                    
x
+
TestISOImage/VersionJSON (0.4s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1765275396-22083
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 2e51b54b5cee5d454381ac23cfe3d8d395879671
iso_test.go:118:   iso_version: v1.37.0-1765505725-22112
--- PASS: TestISOImage/VersionJSON (0.40s)
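
Note: the VersionJSON test reads /version.json from the guest and reports the fields shown above (kicbase_version, minikube_version, commit, iso_version). A minimal sketch of that parsing step in Go follows; the struct is assembled from the field names in the output above and is illustrative, not taken from the test code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Illustrative struct matching the fields printed by the test above.
	type isoVersion struct {
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
		ISOVersion      string `json:"iso_version"`
	}

	func main() {
		data, err := os.ReadFile("/version.json") // same path the test reads over SSH
		if err != nil {
			fmt.Println("read error:", err)
			return
		}
		var v isoVersion
		if err := json.Unmarshal(data, &v); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("iso_version=%s minikube_version=%s\n", v.ISOVersion, v.MinikubeVersion)
	}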

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.33s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-095861 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.33s)
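
Note: the eBPFSupport test checks for /sys/kernel/btf/vmlinux, whose presence indicates the kernel exposes BTF type information used by CO-RE eBPF programs. A plain Go equivalent of the shell check above, included only as an illustration:

	package main

	import (
		"fmt"
		"os"
	)

	// Equivalent of `test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'`,
	// written as a simple stat check.
	func main() {
		if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
			fmt.Println("OK")
		} else {
			fmt.Println("NOT FOUND")
		}
	}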

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-2ffrs" [a4a6e5a4-6e95-4be1-9853-5875cc8d4668] Running
E1212 20:42:28.690675  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/no-preload-095764/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00364137s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-873824 "pgrep -a kubelet"
I1212 20:42:33.970015  139995 config.go:182] Loaded profile config "flannel-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-k4w7q" [637a491c-b1f9-4a54-9e18-341b463fe411] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-k4w7q" [637a491c-b1f9-4a54-9e18-341b463fe411] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004198579s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-873824 "pgrep -a kubelet"
I1212 20:43:00.355732  139995 config.go:182] Loaded profile config "bridge-873824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-873824 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-f47sb" [f3490b9f-dd46-45c5-ac6e-d6e6fdbabeb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-f47sb" [f3490b9f-dd46-45c5-ac6e-d6e6fdbabeb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003771744s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-873824 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-873824 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
363 TestStartStop/group/disable-driver-mounts 0.18
393 TestNetworkPlugins/group/kubenet 4.98
402 TestNetworkPlugins/group/cilium 4.72
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-347541 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
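
Editor's note: gVisor coverage is opt-in; the test skips unless the suite is invoked with the gvisor flag enabled. A sketch of that flag gate, with the flag name assumed from the log message above (the real definition is in the integration test package):

    package integration

    import (
        "flag"
        "testing"
    )

    // Illustrative only: an opt-in flag that gates the gVisor addon test.
    var enableGvisor = flag.Bool("gvisor", false, "run the gVisor addon test")

    func requireGvisor(t *testing.T) {
        t.Helper()
        if !*enableGvisor {
            t.Skipf("skipping test because --gvisor=%v", *enableGvisor)
        }
    }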

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-674293" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-674293
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-873824 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.217:8443
  name: pause-455927
contexts:
- context:
    cluster: pause-455927
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-455927
  name: pause-455927
current-context: ""
kind: Config
users:
- name: pause-455927
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.crt
    client-key: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-873824

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873824"

                                                
                                                
----------------------- debugLogs end: kubenet-873824 [took: 4.795595649s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-873824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-873824
--- SKIP: TestNetworkPlugins/group/kubenet (4.98s)
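
Editor's note: the kubenet group above is skipped at net_test.go:93 because kubenet bypasses CNI while crio requires one, so the kubenet-873824 profile is never created; that is why every kubectl probe in the debugLogs reports a missing context and the kubectl config dump still points at the unrelated pause-455927 profile. A minimal sketch of that guard, with a hypothetical name (the real logic is in net_test.go):

    package integration

    import "testing"

    // Hypothetical guard, for illustration only: kubenet replaces CNI, so a
    // runtime that requires a CNI plugin cannot run the kubenet variant.
    func skipKubenetIfRuntimeNeedsCNI(t *testing.T, runtime string) {
        t.Helper()
        if runtime == "crio" || runtime == "containerd" {
            t.Skipf("Skipping the test as %s container runtimes requires CNI", runtime)
        }
    }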

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1212 20:32:13.400133  139995 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/addons-347541/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-873824 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-873824" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22112-135957/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.217:8443
  name: pause-455927
contexts:
- context:
    cluster: pause-455927
    extensions:
    - extension:
        last-update: Fri, 12 Dec 2025 20:31:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-455927
  name: pause-455927
current-context: ""
kind: Config
users:
- name: pause-455927
  user:
    client-certificate: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.crt
    client-key: /home/jenkins/minikube-integration/22112-135957/.minikube/profiles/pause-455927/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-873824

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-873824" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873824"

                                                
                                                
----------------------- debugLogs end: cilium-873824 [took: 4.553370659s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-873824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-873824
--- SKIP: TestNetworkPlugins/group/cilium (4.72s)

                                                
                                    