Test Report: KVM_Linux_crio 21932

84a896b9ca11c6987b6528b1f6e82b411b2540e2:2025-11-24:42492

Failed tests (2/351)

Order  Failed test                  Duration (s)
37     TestAddons/parallel/Ingress  157.41
244    TestPreload                  162.45
TestAddons/parallel/Ingress (157.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-377447 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-377447 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-377447 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [be5a4fcf-d0b1-4b78-b885-5735b908730d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [be5a4fcf-d0b1-4b78-b885-5735b908730d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.010365092s
I1124 13:17:55.844029  136268 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-377447 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.259275924s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-377447 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.2
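The failing step is the in-VM curl against the ingress controller, which timed out (ssh "status 28" is the remote curl's exit code 28, "operation timed out"). The same check can be reproduced by hand; a minimal sketch using the profile name and binary path recorded in this run:

    # Wait for the ingress-nginx controller to become ready, then curl
    # through the VM's port 80 with the test Host header:
    kubectl --context addons-377447 wait --for=condition=ready \
      --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    out/minikube-linux-amd64 -p addons-377447 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"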
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-377447 -n addons-377447
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 logs -n 25: (1.064153563s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-238831                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-238831 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start   │ --download-only -p binary-mirror-955621 --alsologtostderr --binary-mirror http://127.0.0.1:44405 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-955621 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ delete  │ -p binary-mirror-955621                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-955621 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ addons  │ disable dashboard -p addons-377447                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ addons  │ enable dashboard -p addons-377447                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ start   │ -p addons-377447 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ enable headlamp -p addons-377447 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ ssh     │ addons-377447 ssh cat /opt/local-path-provisioner/pvc-db4c394a-69ff-46d3-ab48-9593d3fc2b9a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ ip      │ addons-377447 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377447                                                                                                                                                                                                                                                                                                                                                                                         │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:17 UTC │
	│ addons  │ addons-377447 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │ 24 Nov 25 13:18 UTC │
	│ ssh     │ addons-377447 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-377447 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │ 24 Nov 25 13:18 UTC │
	│ addons  │ addons-377447 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │ 24 Nov 25 13:18 UTC │
	│ ip      │ addons-377447 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-377447        │ jenkins │ v1.37.0 │ 24 Nov 25 13:20 UTC │ 24 Nov 25 13:20 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:56.811866  136968 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:56.812097  136968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:56.812116  136968 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:56.812121  136968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:56.812301  136968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:14:56.812799  136968 out.go:368] Setting JSON to false
	I1124 13:14:56.813666  136968 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3420,"bootTime":1763986677,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:14:56.813723  136968 start.go:143] virtualization: kvm guest
	I1124 13:14:56.816293  136968 out.go:179] * [addons-377447] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:14:56.817410  136968 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:14:56.817410  136968 notify.go:221] Checking for updates...
	I1124 13:14:56.819911  136968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:56.821047  136968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:14:56.822282  136968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:14:56.823317  136968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:14:56.824375  136968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:14:56.825528  136968 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:56.856515  136968 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 13:14:56.857538  136968 start.go:309] selected driver: kvm2
	I1124 13:14:56.857552  136968 start.go:927] validating driver "kvm2" against <nil>
	I1124 13:14:56.857563  136968 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:14:56.858303  136968 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:56.858575  136968 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:14:56.858603  136968 cni.go:84] Creating CNI manager for ""
	I1124 13:14:56.858648  136968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 13:14:56.858663  136968 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:56.858708  136968 start.go:353] cluster config:
	{Name:addons-377447 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:56.858799  136968 iso.go:125] acquiring lock: {Name:mk70c2563fd35b13c556749f7252ab4e6e575da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:14:56.860185  136968 out.go:179] * Starting "addons-377447" primary control-plane node in "addons-377447" cluster
	I1124 13:14:56.861136  136968 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:56.861162  136968 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:14:56.861173  136968 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:56.861253  136968 preload.go:238] Found /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:14:56.861264  136968 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:14:56.861594  136968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/config.json ...
	I1124 13:14:56.861613  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/config.json: {Name:mke7c1b97fa8e224b440c997f3df68338035e74c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:56.861748  136968 start.go:360] acquireMachinesLock for addons-377447: {Name:mk9fe90a150b6a232eb17397ca6aca3c1b63dcde Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 13:14:56.861791  136968 start.go:364] duration metric: took 30.816µs to acquireMachinesLock for "addons-377447"
	I1124 13:14:56.861808  136968 start.go:93] Provisioning new machine with config: &{Name:addons-377447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:14:56.861865  136968 start.go:125] createHost starting for "" (driver="kvm2")
	I1124 13:14:56.863188  136968 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1124 13:14:56.863327  136968 start.go:159] libmachine.API.Create for "addons-377447" (driver="kvm2")
	I1124 13:14:56.863353  136968 client.go:173] LocalClient.Create starting
	I1124 13:14:56.863434  136968 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem
	I1124 13:14:56.878873  136968 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem
	I1124 13:14:56.956766  136968 main.go:143] libmachine: creating domain...
	I1124 13:14:56.956788  136968 main.go:143] libmachine: creating network...
	I1124 13:14:56.958158  136968 main.go:143] libmachine: found existing default network
	I1124 13:14:56.958395  136968 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 13:14:56.959514  136968 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001be66d0}
	I1124 13:14:56.959616  136968 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-377447</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
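For reference, the private network defined above could be created by hand with the libvirt CLI; a sketch (minikube performs this through the libvirt API rather than virsh, and the file name net.xml is hypothetical):

    # Save the <network> XML above to net.xml, then:
    virsh net-define net.xml          # register the persistent network
    virsh net-start mk-addons-377447  # bring up the bridge with its DHCP range
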
	I1124 13:14:56.965292  136968 main.go:143] libmachine: creating private network mk-addons-377447 192.168.39.0/24...
	I1124 13:14:57.030886  136968 main.go:143] libmachine: private network mk-addons-377447 192.168.39.0/24 created
	I1124 13:14:57.031191  136968 main.go:143] libmachine: <network>
	  <name>mk-addons-377447</name>
	  <uuid>5848449a-9daa-4d36-9a55-1dd46b6e2aa9</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:39:73:64'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1124 13:14:57.031220  136968 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447 ...
	I1124 13:14:57.031241  136968 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21932-132228/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1124 13:14:57.031253  136968 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:14:57.031343  136968 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21932-132228/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21932-132228/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1124 13:14:57.321261  136968 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa...
	I1124 13:14:57.351954  136968 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/addons-377447.rawdisk...
	I1124 13:14:57.351996  136968 main.go:143] libmachine: Writing magic tar header
	I1124 13:14:57.352046  136968 main.go:143] libmachine: Writing SSH key tar header
	I1124 13:14:57.352136  136968 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447 ...
	I1124 13:14:57.352196  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447
	I1124 13:14:57.352219  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447 (perms=drwx------)
	I1124 13:14:57.352231  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21932-132228/.minikube/machines
	I1124 13:14:57.352241  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21932-132228/.minikube/machines (perms=drwxr-xr-x)
	I1124 13:14:57.352251  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:14:57.352263  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21932-132228/.minikube (perms=drwxr-xr-x)
	I1124 13:14:57.352280  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21932-132228
	I1124 13:14:57.352291  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21932-132228 (perms=drwxrwxr-x)
	I1124 13:14:57.352301  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1124 13:14:57.352309  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1124 13:14:57.352317  136968 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1124 13:14:57.352327  136968 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1124 13:14:57.352338  136968 main.go:143] libmachine: checking permissions on dir: /home
	I1124 13:14:57.352345  136968 main.go:143] libmachine: skipping /home - not owner
	I1124 13:14:57.352350  136968 main.go:143] libmachine: defining domain...
	I1124 13:14:57.353533  136968 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-377447</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/addons-377447.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-377447'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1124 13:14:57.360771  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:05:68:ac in network default
	I1124 13:14:57.361477  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:14:57.361498  136968 main.go:143] libmachine: starting domain...
	I1124 13:14:57.361504  136968 main.go:143] libmachine: ensuring networks are active...
	I1124 13:14:57.362395  136968 main.go:143] libmachine: Ensuring network default is active
	I1124 13:14:57.362823  136968 main.go:143] libmachine: Ensuring network mk-addons-377447 is active
	I1124 13:14:57.363594  136968 main.go:143] libmachine: getting domain XML...
	I1124 13:14:57.364731  136968 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-377447</name>
	  <uuid>e0710977-f0fa-49ba-9ad4-5fe1cc92849c</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/addons-377447.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:50:33:79'/>
	      <source network='mk-addons-377447'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:05:68:ac'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
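The effective domain configuration dumped above can be retrieved at any time with the libvirt CLI; a sketch, assuming virsh is available on the host:

    virsh dumpxml addons-377447   # print the live domain XML
    virsh domstate addons-377447  # confirm the domain is running
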
	I1124 13:14:58.658006  136968 main.go:143] libmachine: waiting for domain to start...
	I1124 13:14:58.659340  136968 main.go:143] libmachine: domain is now running
	I1124 13:14:58.659357  136968 main.go:143] libmachine: waiting for IP...
	I1124 13:14:58.660223  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:14:58.660722  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:14:58.660738  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:14:58.660999  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:14:58.661044  136968 retry.go:31] will retry after 239.116265ms: waiting for domain to come up
	I1124 13:14:58.902467  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:14:58.902995  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:14:58.903011  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:14:58.903306  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:14:58.903347  136968 retry.go:31] will retry after 341.150663ms: waiting for domain to come up
	I1124 13:14:59.245816  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:14:59.246366  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:14:59.246383  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:14:59.246655  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:14:59.246689  136968 retry.go:31] will retry after 307.25984ms: waiting for domain to come up
	I1124 13:14:59.555156  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:14:59.555786  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:14:59.555802  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:14:59.556091  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:14:59.556145  136968 retry.go:31] will retry after 556.761935ms: waiting for domain to come up
	I1124 13:15:00.114958  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:00.115566  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:00.115583  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:00.115878  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:00.115916  136968 retry.go:31] will retry after 664.535741ms: waiting for domain to come up
	I1124 13:15:00.781657  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:00.782165  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:00.782179  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:00.782501  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:00.782535  136968 retry.go:31] will retry after 799.557497ms: waiting for domain to come up
	I1124 13:15:01.583557  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:01.584064  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:01.584083  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:01.584414  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:01.584458  136968 retry.go:31] will retry after 928.204412ms: waiting for domain to come up
	I1124 13:15:02.514768  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:02.515358  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:02.515388  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:02.515742  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:02.515785  136968 retry.go:31] will retry after 1.425700692s: waiting for domain to come up
	I1124 13:15:03.943345  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:03.943840  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:03.943854  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:03.944148  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:03.944183  136968 retry.go:31] will retry after 1.798124475s: waiting for domain to come up
	I1124 13:15:05.745331  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:05.745809  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:05.745826  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:05.746102  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:05.746148  136968 retry.go:31] will retry after 1.814439s: waiting for domain to come up
	I1124 13:15:07.562438  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:07.563070  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:07.563087  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:07.563446  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:07.563487  136968 retry.go:31] will retry after 1.901750477s: waiting for domain to come up
	I1124 13:15:09.467246  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:09.467814  136968 main.go:143] libmachine: no network interface addresses found for domain addons-377447 (source=lease)
	I1124 13:15:09.467832  136968 main.go:143] libmachine: trying to list again with source=arp
	I1124 13:15:09.468182  136968 main.go:143] libmachine: unable to find current IP address of domain addons-377447 in network mk-addons-377447 (interfaces detected: [])
	I1124 13:15:09.468223  136968 retry.go:31] will retry after 2.448340429s: waiting for domain to come up
	I1124 13:15:11.917985  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:11.918599  136968 main.go:143] libmachine: domain addons-377447 has current primary IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:11.918618  136968 main.go:143] libmachine: found domain IP: 192.168.39.2
	I1124 13:15:11.918625  136968 main.go:143] libmachine: reserving static IP address...
	I1124 13:15:11.919061  136968 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-377447", mac: "52:54:00:50:33:79", ip: "192.168.39.2"} in network mk-addons-377447
	I1124 13:15:12.104276  136968 main.go:143] libmachine: reserved static IP address 192.168.39.2 for domain addons-377447
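	The IP-discovery loop above polls the network's DHCP leases and falls back to ARP until an address appears. The lease table it waits on can be inspected manually; a sketch, again assuming virsh on the host:

	    virsh net-dhcp-leases mk-addons-377447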
	I1124 13:15:12.104297  136968 main.go:143] libmachine: waiting for SSH...
	I1124 13:15:12.104304  136968 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 13:15:12.107041  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.107535  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.107564  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.107822  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:12.108066  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:12.108080  136968 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 13:15:12.215530  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
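	The first SSH probe simply runs "exit 0" to confirm the guest is reachable. An equivalent manual check, using the key path, user, and IP recorded in this run:

	    ssh -i /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa \
	      docker@192.168.39.2 'exit 0' && echo reachable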
	I1124 13:15:12.215888  136968 main.go:143] libmachine: domain creation complete
	I1124 13:15:12.217377  136968 machine.go:94] provisionDockerMachine start ...
	I1124 13:15:12.219410  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.219729  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.219746  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.219882  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:12.220079  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:12.220089  136968 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:15:12.328735  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 13:15:12.328766  136968 buildroot.go:166] provisioning hostname "addons-377447"
	I1124 13:15:12.331537  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.331988  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.332013  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.332240  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:12.332493  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:12.332507  136968 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-377447 && echo "addons-377447" | sudo tee /etc/hostname
	I1124 13:15:12.455065  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-377447
	
	I1124 13:15:12.458094  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.458593  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.458617  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.458804  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:12.459041  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:12.459057  136968 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-377447' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-377447/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-377447' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:15:12.577518  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:15:12.577552  136968 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21932-132228/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-132228/.minikube}
	I1124 13:15:12.577610  136968 buildroot.go:174] setting up certificates
	I1124 13:15:12.577632  136968 provision.go:84] configureAuth start
	I1124 13:15:12.580499  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.580895  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.580919  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.583244  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.583616  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.583639  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.583771  136968 provision.go:143] copyHostCerts
	I1124 13:15:12.583836  136968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/ca.pem (1078 bytes)
	I1124 13:15:12.583976  136968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/cert.pem (1123 bytes)
	I1124 13:15:12.584037  136968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/key.pem (1675 bytes)
	I1124 13:15:12.584086  136968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem org=jenkins.addons-377447 san=[127.0.0.1 192.168.39.2 addons-377447 localhost minikube]
	I1124 13:15:12.666631  136968 provision.go:177] copyRemoteCerts
	I1124 13:15:12.666692  136968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:15:12.669268  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.669596  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.669630  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.669752  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:12.754696  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:15:12.784499  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 13:15:12.813825  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:15:12.864822  136968 provision.go:87] duration metric: took 287.173478ms to configureAuth
	I1124 13:15:12.864859  136968 buildroot.go:189] setting minikube options for container-runtime
	I1124 13:15:12.865099  136968 config.go:182] Loaded profile config "addons-377447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:12.867991  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.868390  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:12.868428  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:12.868625  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:12.868911  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:12.868934  136968 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:15:13.105551  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:15:13.105585  136968 machine.go:97] duration metric: took 888.189393ms to provisionDockerMachine
	I1124 13:15:13.105596  136968 client.go:176] duration metric: took 16.242235569s to LocalClient.Create
	I1124 13:15:13.105617  136968 start.go:167] duration metric: took 16.242291354s to libmachine.API.Create "addons-377447"
	I1124 13:15:13.105625  136968 start.go:293] postStartSetup for "addons-377447" (driver="kvm2")
	I1124 13:15:13.105637  136968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:15:13.105735  136968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:15:13.108746  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.109171  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.109197  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.109372  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:13.192705  136968 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:15:13.197576  136968 info.go:137] Remote host: Buildroot 2025.02
	I1124 13:15:13.197618  136968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-132228/.minikube/addons for local assets ...
	I1124 13:15:13.197704  136968 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-132228/.minikube/files for local assets ...
	I1124 13:15:13.197733  136968 start.go:296] duration metric: took 92.101589ms for postStartSetup
	I1124 13:15:13.200664  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.201022  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.201043  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.201227  136968 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/config.json ...
	I1124 13:15:13.201398  136968 start.go:128] duration metric: took 16.339522444s to createHost
	I1124 13:15:13.203380  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.203694  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.203726  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.203870  136968 main.go:143] libmachine: Using SSH client type: native
	I1124 13:15:13.204058  136968 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I1124 13:15:13.204068  136968 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 13:15:13.308702  136968 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763990113.267077392
	
	I1124 13:15:13.308729  136968 fix.go:216] guest clock: 1763990113.267077392
	I1124 13:15:13.308736  136968 fix.go:229] Guest: 2025-11-24 13:15:13.267077392 +0000 UTC Remote: 2025-11-24 13:15:13.201410305 +0000 UTC m=+16.438422616 (delta=65.667087ms)
	I1124 13:15:13.308753  136968 fix.go:200] guest clock delta is within tolerance: 65.667087ms
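
The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the clock skew when it falls inside a tolerance. A small sketch of that comparison, reusing the 65.667087ms delta from this run; the 2-second tolerance here is an assumed value for illustration, not necessarily minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute skew between the guest's
// clock and the host's view of "now", and whether it is small enough to
// ignore rather than forcing a clock resync.
func clockDeltaWithinTolerance(guest, remote time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1763990113, 267077392)        // parsed from `date +%s.%N` on the guest
	remote := guest.Add(-65667087 * time.Nanosecond) // host-side timestamp from this run
	delta, ok := clockDeltaWithinTolerance(guest, remote, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
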
	I1124 13:15:13.308759  136968 start.go:83] releasing machines lock for "addons-377447", held for 16.4469592s
	I1124 13:15:13.311331  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.311639  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.311658  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.312179  136968 ssh_runner.go:195] Run: cat /version.json
	I1124 13:15:13.312279  136968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:15:13.315954  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.316440  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.317307  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.317352  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.317357  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:13.317394  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:13.317530  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:13.317735  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:13.394090  136968 ssh_runner.go:195] Run: systemctl --version
	I1124 13:15:13.417958  136968 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:15:13.569460  136968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:15:13.575760  136968 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:15:13.575830  136968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:15:13.593900  136968 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:15:13.593940  136968 start.go:496] detecting cgroup driver to use...
	I1124 13:15:13.594020  136968 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:15:13.612164  136968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:15:13.626829  136968 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:15:13.626880  136968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:15:13.644182  136968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:15:13.659776  136968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:15:13.797994  136968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:15:14.000155  136968 docker.go:234] disabling docker service ...
	I1124 13:15:14.000241  136968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:15:14.015735  136968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:15:14.029943  136968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:15:14.179841  136968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:15:14.320267  136968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:15:14.335925  136968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:15:14.357928  136968 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:15:14.358024  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.369644  136968 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 13:15:14.369700  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.381482  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.392929  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.404259  136968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:15:14.416293  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.427728  136968 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.446438  136968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:15:14.457598  136968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:15:14.467193  136968 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 13:15:14.467239  136968 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 13:15:14.486269  136968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:15:14.496799  136968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:15:14.633398  136968 ssh_runner.go:195] Run: sudo systemctl restart crio
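
The crio.go steps above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) all follow the same pattern: rewrite a `key = value` line in /etc/crio/crio.conf.d/02-crio.conf with sed, then daemon-reload and restart CRI-O. A rough in-process equivalent of that replace-or-append edit; setConfValue is our own illustrative helper, not minikube's API:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setConfValue replaces any existing `key = ...` line in a crio drop-in
// config with the desired value, appending the line if none exists. This
// mirrors the sed one-liners in the log, which run over SSH on the guest.
func setConfValue(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return strings.TrimRight(conf, "\n") + "\n" + line + "\n"
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setConfValue(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
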
	I1124 13:15:15.070386  136968 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:15:15.070498  136968 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:15:15.075697  136968 start.go:564] Will wait 60s for crictl version
	I1124 13:15:15.075774  136968 ssh_runner.go:195] Run: which crictl
	I1124 13:15:15.079810  136968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 13:15:15.115413  136968 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 13:15:15.115560  136968 ssh_runner.go:195] Run: crio --version
	I1124 13:15:15.143019  136968 ssh_runner.go:195] Run: crio --version
	I1124 13:15:15.190879  136968 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1124 13:15:15.277271  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:15.277738  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:15.277773  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:15.277984  136968 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 13:15:15.282754  136968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:15:15.297969  136968 kubeadm.go:884] updating cluster {Name:addons-377447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:15:15.298154  136968 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:15:15.298203  136968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:15:15.326958  136968 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:15:15.327031  136968 ssh_runner.go:195] Run: which lz4
	I1124 13:15:15.331438  136968 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 13:15:15.336129  136968 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 13:15:15.336160  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1124 13:15:16.644587  136968 crio.go:462] duration metric: took 1.313179234s to copy over tarball
	I1124 13:15:16.644663  136968 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 13:15:18.204657  136968 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.559963007s)
	I1124 13:15:18.204695  136968 crio.go:469] duration metric: took 1.560076309s to extract the tarball
	I1124 13:15:18.204705  136968 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1124 13:15:18.247504  136968 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:15:18.284669  136968 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:15:18.284693  136968 cache_images.go:86] Images are preloaded, skipping loading
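
The preload decision on both sides of the tarball extraction rests on listing images through `crictl images --output json` and looking for a known tag such as registry.k8s.io/kube-apiserver:v1.34.1. A compact sketch of that check; the `images`/`repoTags` field names follow the CRI JSON output as we understand it, so treat them as an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList is a trimmed-down view of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// preloaded reports whether the wanted tag appears in the runtime's image
// store; if not, the preload tarball must be copied over and extracted.
func preloaded(raw []byte, want string) bool {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true
			}
		}
	}
	return false
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"]}]}`)
	fmt.Println(preloaded(raw, "registry.k8s.io/kube-apiserver:v1.34.1"))
}
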
	I1124 13:15:18.284701  136968 kubeadm.go:935] updating node { 192.168.39.2 8443 v1.34.1 crio true true} ...
	I1124 13:15:18.284795  136968 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-377447 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-377447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:15:18.284862  136968 ssh_runner.go:195] Run: crio config
	I1124 13:15:18.331090  136968 cni.go:84] Creating CNI manager for ""
	I1124 13:15:18.331132  136968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 13:15:18.331153  136968 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:15:18.331190  136968 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-377447 NodeName:addons-377447 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:15:18.331365  136968 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-377447"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:15:18.331443  136968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:15:18.343734  136968 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:15:18.343801  136968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:15:18.355097  136968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1124 13:15:18.373887  136968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:15:18.393068  136968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1124 13:15:18.411858  136968 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:15:18.415809  136968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:15:18.430696  136968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:15:18.576328  136968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:15:18.605556  136968 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447 for IP: 192.168.39.2
	I1124 13:15:18.605583  136968 certs.go:195] generating shared ca certs ...
	I1124 13:15:18.605606  136968 certs.go:227] acquiring lock for ca certs: {Name:mkb6ec2dec3468295f1184b421b26a51902e7ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.606485  136968 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key
	I1124 13:15:18.658593  136968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt ...
	I1124 13:15:18.658625  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt: {Name:mk5cc797c50cac78fcd9580992635136840695a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.659517  136968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key ...
	I1124 13:15:18.659543  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key: {Name:mk826d3017b23e3acdce29a951dc971a39b8a638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.660184  136968 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key
	I1124 13:15:18.733475  136968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.crt ...
	I1124 13:15:18.733506  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.crt: {Name:mk19d7ce85dae636d1cd8f445f2038af518d6594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.733678  136968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key ...
	I1124 13:15:18.733690  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key: {Name:mk912c6a4463e3416f10db69af91f6143548eb59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.734467  136968 certs.go:257] generating profile certs ...
	I1124 13:15:18.734534  136968 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.key
	I1124 13:15:18.734549  136968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt with IP's: []
	I1124 13:15:18.760983  136968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt ...
	I1124 13:15:18.761010  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: {Name:mkaba231a70cd14b6ce653dab09569afc7013a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.761188  136968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.key ...
	I1124 13:15:18.761201  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.key: {Name:mka28b62d840ae22c6305a5e4c0a048186cf314f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.761278  136968 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key.d41fa9f7
	I1124 13:15:18.761296  136968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt.d41fa9f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I1124 13:15:18.858364  136968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt.d41fa9f7 ...
	I1124 13:15:18.858402  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt.d41fa9f7: {Name:mk39df75849f59db608952d9502780a9e5ae6346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.858594  136968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key.d41fa9f7 ...
	I1124 13:15:18.858611  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key.d41fa9f7: {Name:mkf5ab22a90b33125aca545d5fe91c28822cd4a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.858705  136968 certs.go:382] copying /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt.d41fa9f7 -> /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt
	I1124 13:15:18.858789  136968 certs.go:386] copying /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key.d41fa9f7 -> /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key
	I1124 13:15:18.858844  136968 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.key
	I1124 13:15:18.858865  136968 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.crt with IP's: []
	I1124 13:15:18.886540  136968 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.crt ...
	I1124 13:15:18.886569  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.crt: {Name:mk4f458f755818dcfdd4de90205e1060363bc530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.886739  136968 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.key ...
	I1124 13:15:18.886752  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.key: {Name:mk13ab70e71715d505bd55dc4f0d3f739ac890c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:18.887567  136968 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:15:18.887614  136968 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:15:18.887649  136968 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:15:18.887680  136968 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem (1675 bytes)
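
certs.go first generates the shared CAs (minikubeCA, proxyClientCA), then issues profile certificates signed by them with the SANs shown above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.2 for the apiserver cert). A self-contained sketch of that CA-plus-leaf issuance with Go's crypto/x509; subjects, lifetimes, and key sizes are illustrative, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, analogous to the "minikubeCA" generation in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert signed by the CA, carrying the IP SANs the apiserver needs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.2")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d bytes DER\n", len(srvDER))
}
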
	I1124 13:15:18.888320  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:15:18.916995  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:15:18.944491  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:15:18.971990  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 13:15:18.999883  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 13:15:19.028634  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:15:19.057722  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:15:19.086858  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 13:15:19.114459  136968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:15:19.142044  136968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:15:19.161015  136968 ssh_runner.go:195] Run: openssl version
	I1124 13:15:19.167150  136968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:15:19.179226  136968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:15:19.184063  136968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:15 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:15:19.184153  136968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:15:19.191084  136968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:15:19.203057  136968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:15:19.207593  136968 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:15:19.207661  136968 kubeadm.go:401] StartCluster: {Name:addons-377447 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-377447 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:15:19.207754  136968 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:15:19.207829  136968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:15:19.238940  136968 cri.go:89] found id: ""
	I1124 13:15:19.239031  136968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:15:19.250828  136968 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:15:19.264067  136968 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:15:19.276056  136968 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:15:19.276085  136968 kubeadm.go:158] found existing configuration files:
	
	I1124 13:15:19.276166  136968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:15:19.288260  136968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:15:19.288345  136968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:15:19.299438  136968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:15:19.309866  136968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:15:19.309939  136968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:15:19.322576  136968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:15:19.332727  136968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:15:19.332802  136968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:15:19.343884  136968 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:15:19.354281  136968 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:15:19.354361  136968 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:15:19.365366  136968 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1124 13:15:19.499984  136968 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:15:29.982162  136968 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:15:29.982213  136968 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:15:29.982276  136968 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:15:29.982417  136968 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:15:29.982553  136968 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:15:29.982651  136968 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:15:29.984160  136968 out.go:252]   - Generating certificates and keys ...
	I1124 13:15:29.984247  136968 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:15:29.984331  136968 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:15:29.984432  136968 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:15:29.984531  136968 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:15:29.984611  136968 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:15:29.984687  136968 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:15:29.984763  136968 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:15:29.984902  136968 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-377447 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1124 13:15:29.984973  136968 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:15:29.985145  136968 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-377447 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I1124 13:15:29.985253  136968 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:15:29.985360  136968 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:15:29.985427  136968 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:15:29.985508  136968 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:15:29.985576  136968 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:15:29.985628  136968 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:15:29.985673  136968 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:15:29.985738  136968 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:15:29.985786  136968 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:15:29.985860  136968 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:15:29.985917  136968 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:15:29.987269  136968 out.go:252]   - Booting up control plane ...
	I1124 13:15:29.987354  136968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:15:29.987444  136968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:15:29.987515  136968 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:15:29.987597  136968 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:15:29.987673  136968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:15:29.987763  136968 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:15:29.987833  136968 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:15:29.987866  136968 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:15:29.987977  136968 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:15:29.988074  136968 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:15:29.988171  136968 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.044681ms
	I1124 13:15:29.988292  136968 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:15:29.988371  136968 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.2:8443/livez
	I1124 13:15:29.988454  136968 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:15:29.988535  136968 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:15:29.988601  136968 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.147860954s
	I1124 13:15:29.988662  136968 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.487213093s
	I1124 13:15:29.988733  136968 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501893503s
	I1124 13:15:29.988837  136968 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:15:29.988937  136968 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:15:29.988992  136968 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:15:29.989162  136968 kubeadm.go:319] [mark-control-plane] Marking the node addons-377447 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:15:29.989218  136968 kubeadm.go:319] [bootstrap-token] Using token: bn959t.xt4fak5rasws259g
	I1124 13:15:29.990470  136968 out.go:252]   - Configuring RBAC rules ...
	I1124 13:15:29.990586  136968 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:15:29.990680  136968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:15:29.990846  136968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:15:29.991005  136968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:15:29.991103  136968 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:15:29.991238  136968 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:15:29.991419  136968 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:15:29.991488  136968 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:15:29.991561  136968 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:15:29.991576  136968 kubeadm.go:319] 
	I1124 13:15:29.991662  136968 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:15:29.991674  136968 kubeadm.go:319] 
	I1124 13:15:29.991773  136968 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:15:29.991783  136968 kubeadm.go:319] 
	I1124 13:15:29.991808  136968 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:15:29.991921  136968 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:15:29.991980  136968 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:15:29.991986  136968 kubeadm.go:319] 
	I1124 13:15:29.992030  136968 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:15:29.992036  136968 kubeadm.go:319] 
	I1124 13:15:29.992087  136968 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:15:29.992101  136968 kubeadm.go:319] 
	I1124 13:15:29.992188  136968 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:15:29.992305  136968 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:15:29.992379  136968 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:15:29.992389  136968 kubeadm.go:319] 
	I1124 13:15:29.992494  136968 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:15:29.992621  136968 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:15:29.992632  136968 kubeadm.go:319] 
	I1124 13:15:29.992706  136968 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token bn959t.xt4fak5rasws259g \
	I1124 13:15:29.992849  136968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ebe8dbc752ced325bd20d28582dd5d68e7035ef277b086188bf2f71fe68c8d00 \
	I1124 13:15:29.992891  136968 kubeadm.go:319] 	--control-plane 
	I1124 13:15:29.992900  136968 kubeadm.go:319] 
	I1124 13:15:29.993010  136968 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:15:29.993023  136968 kubeadm.go:319] 
	I1124 13:15:29.993141  136968 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token bn959t.xt4fak5rasws259g \
	I1124 13:15:29.993261  136968 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ebe8dbc752ced325bd20d28582dd5d68e7035ef277b086188bf2f71fe68c8d00 
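
The control-plane-check phase above polls each component's local health endpoint (kube-apiserver /livez on 8443, kube-controller-manager /healthz on 10257, kube-scheduler /livez on 10259) until it answers 200 OK, within a 4m0s ceiling. A generic sketch of that kind of wait; waitHealthy is our own helper, and skipping TLS verification is a shortcut for the sketch (kubeadm trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a health endpoint until it returns 200 or the deadline
// passes, returning how long the component took to come up.
func waitHealthy(url string, deadline time.Duration) (time.Duration, error) {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	start := time.Now()
	for time.Since(start) < deadline {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return time.Since(start), nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return deadline, fmt.Errorf("%s not healthy after %v", url, deadline)
}

func main() {
	took, err := waitHealthy("https://127.0.0.1:10257/healthz", 4*time.Minute)
	fmt.Println(took, err)
}
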
	I1124 13:15:29.993290  136968 cni.go:84] Creating CNI manager for ""
	I1124 13:15:29.993299  136968 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 13:15:29.995507  136968 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 13:15:29.996671  136968 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 13:15:30.012050  136968 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
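
The log records only that a 496-byte /etc/cni/net.d/1-k8s.conflist was written, not its contents. For orientation, the following prints a generic bridge-plugin conflist in the standard CNI schema; the values are illustrative (the subnet matches the 10.244.0.0/16 pod CIDR used above) and are not claimed to be minikube's exact template:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A minimal bridge + host-local IPAM conflist; illustrative only.
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]any{{
			"type":             "bridge",
			"bridge":           "bridge",
			"isDefaultGateway": true,
			"ipMasq":           true,
			"ipam": map[string]any{
				"type":   "host-local",
				"subnet": "10.244.0.0/16",
			},
		}},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
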
	I1124 13:15:30.036807  136968 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:15:30.036913  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:30.037006  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-377447 minikube.k8s.io/updated_at=2025_11_24T13_15_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=addons-377447 minikube.k8s.io/primary=true
	I1124 13:15:30.065097  136968 ops.go:34] apiserver oom_adj: -16
	I1124 13:15:30.146846  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:30.647289  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:31.147224  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:31.647243  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:32.147938  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:32.647766  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:33.147256  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:33.646988  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:34.147559  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:34.647696  136968 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:15:34.790646  136968 kubeadm.go:1114] duration metric: took 4.753810505s to wait for elevateKubeSystemPrivileges
	I1124 13:15:34.790695  136968 kubeadm.go:403] duration metric: took 15.583039561s to StartCluster
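
The ten `kubectl get sa default` runs above are a poll at ~500ms intervals: the cluster-admin binding for kube-system cannot take effect until the token controller has created the default service account, hence the 4.75s elevateKubeSystemPrivileges wait. The equivalent loop (a sketch):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the retries above
    done
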
	I1124 13:15:34.790726  136968 settings.go:142] acquiring lock: {Name:mk1b72f2bf40456dafe7bf268d29a6f5461b2aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:34.791275  136968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:15:34.791811  136968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/kubeconfig: {Name:mk8ced9b1c350dbdaec836e11cf0177ea98a374d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:15:34.792058  136968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:15:34.792077  136968 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:15:34.792173  136968 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
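
The toEnable map is the resolved per-addon state for this profile; each `name:true` pair corresponds to what the CLI would set with `minikube addons enable`. A sketch using a few of the names from the map above:

    minikube -p addons-377447 addons enable ingress
    minikube -p addons-377447 addons enable registry
    minikube -p addons-377447 addons enable metrics-server
    minikube -p addons-377447 addons enable csi-hostpath-driver
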
	I1124 13:15:34.792312  136968 addons.go:70] Setting yakd=true in profile "addons-377447"
	I1124 13:15:34.792325  136968 addons.go:70] Setting inspektor-gadget=true in profile "addons-377447"
	I1124 13:15:34.792347  136968 addons.go:239] Setting addon inspektor-gadget=true in "addons-377447"
	I1124 13:15:34.792349  136968 addons.go:239] Setting addon yakd=true in "addons-377447"
	I1124 13:15:34.792343  136968 addons.go:70] Setting registry-creds=true in profile "addons-377447"
	I1124 13:15:34.792364  136968 addons.go:239] Setting addon registry-creds=true in "addons-377447"
	I1124 13:15:34.792366  136968 config.go:182] Loaded profile config "addons-377447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:34.792385  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792397  136968 addons.go:70] Setting storage-provisioner=true in profile "addons-377447"
	I1124 13:15:34.792409  136968 addons.go:70] Setting metrics-server=true in profile "addons-377447"
	I1124 13:15:34.792413  136968 addons.go:239] Setting addon storage-provisioner=true in "addons-377447"
	I1124 13:15:34.792416  136968 addons.go:70] Setting ingress=true in profile "addons-377447"
	I1124 13:15:34.792421  136968 addons.go:239] Setting addon metrics-server=true in "addons-377447"
	I1124 13:15:34.792405  136968 addons.go:70] Setting default-storageclass=true in profile "addons-377447"
	I1124 13:15:34.792429  136968 addons.go:239] Setting addon ingress=true in "addons-377447"
	I1124 13:15:34.792427  136968 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-377447"
	I1124 13:15:34.792437  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792441  136968 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-377447"
	I1124 13:15:34.792443  136968 addons.go:70] Setting registry=true in profile "addons-377447"
	I1124 13:15:34.792449  136968 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-377447"
	I1124 13:15:34.792456  136968 addons.go:239] Setting addon registry=true in "addons-377447"
	I1124 13:15:34.792483  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792484  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792490  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792721  136968 addons.go:70] Setting gcp-auth=true in profile "addons-377447"
	I1124 13:15:34.792765  136968 mustload.go:66] Loading cluster: addons-377447
	I1124 13:15:34.792853  136968 addons.go:70] Setting volcano=true in profile "addons-377447"
	I1124 13:15:34.792881  136968 addons.go:239] Setting addon volcano=true in "addons-377447"
	I1124 13:15:34.792906  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792962  136968 config.go:182] Loaded profile config "addons-377447": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:34.792385  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.792400  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.793787  136968 addons.go:70] Setting cloud-spanner=true in profile "addons-377447"
	I1124 13:15:34.793809  136968 addons.go:239] Setting addon cloud-spanner=true in "addons-377447"
	I1124 13:15:34.793833  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.793845  136968 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-377447"
	I1124 13:15:34.793897  136968 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-377447"
	I1124 13:15:34.792436  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.793923  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.794199  136968 out.go:179] * Verifying Kubernetes components...
	I1124 13:15:34.794536  136968 addons.go:70] Setting volumesnapshots=true in profile "addons-377447"
	I1124 13:15:34.794558  136968 addons.go:239] Setting addon volumesnapshots=true in "addons-377447"
	I1124 13:15:34.794585  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.794687  136968 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-377447"
	I1124 13:15:34.794706  136968 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-377447"
	I1124 13:15:34.794726  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.794849  136968 addons.go:70] Setting ingress-dns=true in profile "addons-377447"
	I1124 13:15:34.794873  136968 addons.go:239] Setting addon ingress-dns=true in "addons-377447"
	I1124 13:15:34.794908  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.795295  136968 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-377447"
	I1124 13:15:34.795321  136968 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-377447"
	I1124 13:15:34.795412  136968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:15:34.799683  136968 addons.go:239] Setting addon default-storageclass=true in "addons-377447"
	I1124 13:15:34.799719  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.799890  136968 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 13:15:34.799912  136968 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 13:15:34.799890  136968 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 13:15:34.800348  136968 host.go:66] Checking if "addons-377447" exists ...
	W1124 13:15:34.800639  136968 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
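
The volcano warning is expected here: per the message, the addon declares no crio support, so enabling it on this kvm2/crio profile fails fast in its callback rather than deploying anything. Its reported state can be checked per profile (a sketch):

    minikube -p addons-377447 addons list | grep volcano
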
	I1124 13:15:34.801167  136968 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:15:34.801230  136968 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:34.801244  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 13:15:34.801320  136968 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:34.801603  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 13:15:34.802760  136968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:34.802811  136968 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 13:15:34.802908  136968 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 13:15:34.802948  136968 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 13:15:34.802955  136968 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 13:15:34.803823  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 13:15:34.803834  136968 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:15:34.802976  136968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:34.804486  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:15:34.804516  136968 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-377447"
	I1124 13:15:34.804548  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:34.802971  136968 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 13:15:34.803850  136968 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:15:34.804385  136968 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:34.805167  136968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:15:34.807912  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.808227  136968 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 13:15:34.808228  136968 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 13:15:34.808572  136968 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 13:15:34.808276  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.808364  136968 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:34.808753  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 13:15:34.808366  136968 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 13:15:34.808889  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 13:15:34.808943  136968 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 13:15:34.808973  136968 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:34.808986  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 13:15:34.809008  136968 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 13:15:34.808956  136968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:34.808528  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.809452  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.808965  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 13:15:34.809807  136968 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:34.809824  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 13:15:34.810015  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.810070  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.810008  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.810202  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.810507  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.810533  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 13:15:34.810489  136968 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:34.811272  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 13:15:34.810558  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 13:15:34.811432  136968 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 13:15:34.810805  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.811472  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.811505  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.811673  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.811715  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.812085  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.812311  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.812452  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.813657  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.813688  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.814345  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.814526  136968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 13:15:34.814807  136968 out.go:179]   - Using image docker.io/busybox:stable
	I1124 13:15:34.815890  136968 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:34.815906  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 13:15:34.815927  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 13:15:34.816014  136968 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:34.816024  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 13:15:34.817503  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.817757  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.817974  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 13:15:34.818162  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.818378  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.818550  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.818587  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.818608  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.818608  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.818640  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.819000  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.819330  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.819375  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.819437  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.819466  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.819525  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.819585  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.819906  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.819980  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.820016  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.820043  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.820069  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.820166  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 13:15:34.820680  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.820943  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.820977  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.821015  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.821043  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.821254  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.821451  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.822363  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 13:15:34.822500  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.822725  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.822908  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.822936  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.823124  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.823153  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.823182  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.823351  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:34.824424  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 13:15:34.825597  136968 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 13:15:34.826590  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 13:15:34.826602  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 13:15:34.828996  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.829389  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:34.829411  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:34.829567  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	W1124 13:15:34.980595  136968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36402->192.168.39.2:22: read: connection reset by peer
	I1124 13:15:34.980629  136968 retry.go:31] will retry after 347.28143ms: ssh: handshake failed: read tcp 192.168.39.1:36402->192.168.39.2:22: read: connection reset by peer
	W1124 13:15:34.980680  136968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36406->192.168.39.2:22: read: connection reset by peer
	I1124 13:15:34.980685  136968 retry.go:31] will retry after 366.415496ms: ssh: handshake failed: read tcp 192.168.39.1:36406->192.168.39.2:22: read: connection reset by peer
	W1124 13:15:35.042648  136968 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36436->192.168.39.2:22: read: connection reset by peer
	I1124 13:15:35.042680  136968 retry.go:31] will retry after 225.553259ms: ssh: handshake failed: read tcp 192.168.39.1:36436->192.168.39.2:22: read: connection reset by peer
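
The three handshake failures above are transient: sshd inside the fresh VM is still coming up, so the TCP connection is reset mid-handshake and sshutil retries after a few hundred milliseconds. A manual equivalent, reusing the key path and user from the sshutil lines above (a sketch):

    until ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa \
        docker@192.168.39.2 true; do
      sleep 0.3
    done
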
	I1124 13:15:35.344466  136968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
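
The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a `log` directive before `errors` and a `hosts` block before the `forward` line, so that host.minikube.internal resolves to the host-side gateway IP. Reconstructed from the two -e expressions, the injected fragment and a way to verify it (a sketch):

    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
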
	I1124 13:15:35.344510  136968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:15:35.534524  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:15:35.575415  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 13:15:35.693576  136968 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 13:15:35.693603  136968 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 13:15:35.695791  136968 node_ready.go:35] waiting up to 6m0s for node "addons-377447" to be "Ready" ...
	I1124 13:15:35.701296  136968 node_ready.go:49] node "addons-377447" is "Ready"
	I1124 13:15:35.701338  136968 node_ready.go:38] duration metric: took 5.507154ms for node "addons-377447" to be "Ready" ...
	I1124 13:15:35.701359  136968 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:15:35.701427  136968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:15:35.811985  136968 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 13:15:35.812006  136968 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 13:15:35.813585  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:15:35.820138  136968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 13:15:35.820174  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 13:15:35.825452  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:15:35.872366  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:15:35.969544  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:15:36.023646  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:15:36.042488  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:15:36.058609  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:15:36.257862  136968 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:36.257895  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 13:15:36.344225  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 13:15:36.344251  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 13:15:36.364934  136968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 13:15:36.364961  136968 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 13:15:36.393239  136968 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 13:15:36.393278  136968 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 13:15:36.447013  136968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 13:15:36.447050  136968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 13:15:36.479239  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:15:36.705126  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:15:36.725100  136968 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 13:15:36.725152  136968 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 13:15:36.746195  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 13:15:36.746230  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 13:15:36.794463  136968 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:36.794502  136968 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 13:15:36.869154  136968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 13:15:36.869214  136968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 13:15:37.035844  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 13:15:37.035883  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 13:15:37.068877  136968 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:37.068903  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 13:15:37.166300  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:15:37.247885  136968 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 13:15:37.247926  136968 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 13:15:37.258407  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 13:15:37.258438  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 13:15:37.418397  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:15:37.656505  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 13:15:37.656539  136968 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 13:15:37.686809  136968 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 13:15:37.686849  136968 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 13:15:37.988359  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 13:15:37.988385  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 13:15:38.003450  136968 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:38.003481  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 13:15:38.180807  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:38.283072  136968 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.938543327s)
	I1124 13:15:38.283135  136968 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1124 13:15:38.348509  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 13:15:38.348538  136968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 13:15:38.724788  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 13:15:38.724812  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 13:15:38.788926  136968 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-377447" context rescaled to 1 replicas
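
The rescale above exists because kubeadm ships two coredns replicas while a single-node profile only needs one. A CLI equivalent of what kapi.go does here (a sketch):

    kubectl --context addons-377447 -n kube-system scale deployment coredns --replicas=1
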
	I1124 13:15:39.184659  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 13:15:39.184689  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 13:15:39.479684  136968 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:39.479728  136968 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 13:15:39.763378  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:15:40.377054  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.801596039s)
	I1124 13:15:40.377163  136968 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.675705539s)
	I1124 13:15:40.377207  136968 api_server.go:72] duration metric: took 5.585100374s to wait for apiserver process to appear ...
	I1124 13:15:40.377219  136968 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:15:40.377244  136968 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I1124 13:15:40.385406  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.850833381s)
	I1124 13:15:40.396721  136968 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I1124 13:15:40.399505  136968 api_server.go:141] control plane version: v1.34.1
	I1124 13:15:40.399551  136968 api_server.go:131] duration metric: took 22.323123ms to wait for apiserver health ...
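
The healthz probe is a plain HTTPS GET against the apiserver's /healthz endpoint; HTTP 200 with body "ok" is the pass condition seen above. By hand (a sketch; -k skips CA verification, which minikube's own client does properly via the cluster CA):

    curl -sk https://192.168.39.2:8443/healthz   # expect: ok
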
	I1124 13:15:40.399568  136968 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:15:40.418074  136968 system_pods.go:59] 13 kube-system pods found
	I1124 13:15:40.418149  136968 system_pods.go:61] "amd-gpu-device-plugin-pkczz" [02dfe46b-55f9-4ba4-b9df-1600e6cc125f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:40.418182  136968 system_pods.go:61] "coredns-66bc5c9577-6zxkz" [f4c583e8-4930-4c8a-b9af-c95ff7a30529] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:40.418194  136968 system_pods.go:61] "coredns-66bc5c9577-76tbp" [3d201271-bd30-4409-b8f6-116994398008] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:40.418199  136968 system_pods.go:61] "etcd-addons-377447" [f11432ef-7fc9-4fde-a2ab-7a4f9f2cb66e] Running
	I1124 13:15:40.418205  136968 system_pods.go:61] "kube-apiserver-addons-377447" [d8c81786-f011-439f-a363-bb6a1197c8ea] Running
	I1124 13:15:40.418213  136968 system_pods.go:61] "kube-controller-manager-addons-377447" [342a2369-8135-4bd7-84ff-4d5f36b39e48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 13:15:40.418224  136968 system_pods.go:61] "kube-ingress-dns-minikube" [5185dc6e-a201-4717-9f3c-24e7ea61f5c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:40.418230  136968 system_pods.go:61] "kube-proxy-bvcds" [39664132-6873-4007-b90c-d8ee37a0ab04] Running
	I1124 13:15:40.418240  136968 system_pods.go:61] "kube-scheduler-addons-377447" [dfb59f58-071d-48ae-99ce-a211df9076ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 13:15:40.418248  136968 system_pods.go:61] "nvidia-device-plugin-daemonset-tdfqm" [d86dfc08-f0af-4c6a-a8ca-886da893bef3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:40.418260  136968 system_pods.go:61] "registry-6b586f9694-g2f52" [f6398e16-e752-4316-8684-c40140559c04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:40.418270  136968 system_pods.go:61] "registry-creds-764b6fb674-8v746" [0d44dec7-4335-412e-8130-6330f45fcc91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:40.418283  136968 system_pods.go:61] "registry-proxy-gtc9t" [81f6349a-0cc7-42a4-9413-1690879cf35e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:40.418292  136968 system_pods.go:74] duration metric: took 18.715587ms to wait for pod list to return data ...
	I1124 13:15:40.418306  136968 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:15:40.468973  136968 default_sa.go:45] found service account: "default"
	I1124 13:15:40.469007  136968 default_sa.go:55] duration metric: took 50.691788ms for default service account to be created ...
	I1124 13:15:40.469022  136968 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:15:40.525223  136968 system_pods.go:86] 13 kube-system pods found
	I1124 13:15:40.525259  136968 system_pods.go:89] "amd-gpu-device-plugin-pkczz" [02dfe46b-55f9-4ba4-b9df-1600e6cc125f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:40.525267  136968 system_pods.go:89] "coredns-66bc5c9577-6zxkz" [f4c583e8-4930-4c8a-b9af-c95ff7a30529] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:40.525286  136968 system_pods.go:89] "coredns-66bc5c9577-76tbp" [3d201271-bd30-4409-b8f6-116994398008] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:40.525293  136968 system_pods.go:89] "etcd-addons-377447" [f11432ef-7fc9-4fde-a2ab-7a4f9f2cb66e] Running
	I1124 13:15:40.525300  136968 system_pods.go:89] "kube-apiserver-addons-377447" [d8c81786-f011-439f-a363-bb6a1197c8ea] Running
	I1124 13:15:40.525309  136968 system_pods.go:89] "kube-controller-manager-addons-377447" [342a2369-8135-4bd7-84ff-4d5f36b39e48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 13:15:40.525318  136968 system_pods.go:89] "kube-ingress-dns-minikube" [5185dc6e-a201-4717-9f3c-24e7ea61f5c4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:40.525322  136968 system_pods.go:89] "kube-proxy-bvcds" [39664132-6873-4007-b90c-d8ee37a0ab04] Running
	I1124 13:15:40.525330  136968 system_pods.go:89] "kube-scheduler-addons-377447" [dfb59f58-071d-48ae-99ce-a211df9076ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 13:15:40.525337  136968 system_pods.go:89] "nvidia-device-plugin-daemonset-tdfqm" [d86dfc08-f0af-4c6a-a8ca-886da893bef3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:40.525343  136968 system_pods.go:89] "registry-6b586f9694-g2f52" [f6398e16-e752-4316-8684-c40140559c04] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:40.525351  136968 system_pods.go:89] "registry-creds-764b6fb674-8v746" [0d44dec7-4335-412e-8130-6330f45fcc91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:40.525356  136968 system_pods.go:89] "registry-proxy-gtc9t" [81f6349a-0cc7-42a4-9413-1690879cf35e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:40.525368  136968 system_pods.go:126] duration metric: took 56.336922ms to wait for k8s-apps to be running ...
	I1124 13:15:40.525382  136968 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:15:40.525438  136968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:15:42.281569  136968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 13:15:42.284495  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:42.285088  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:42.285137  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:42.285354  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:42.467735  136968 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 13:15:42.531561  136968 addons.go:239] Setting addon gcp-auth=true in "addons-377447"
	I1124 13:15:42.531616  136968 host.go:66] Checking if "addons-377447" exists ...
	I1124 13:15:42.533755  136968 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 13:15:42.536365  136968 main.go:143] libmachine: domain addons-377447 has defined MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:42.536770  136968 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:33:79", ip: ""} in network mk-addons-377447: {Iface:virbr1 ExpiryTime:2025-11-24 14:15:11 +0000 UTC Type:0 Mac:52:54:00:50:33:79 Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-377447 Clientid:01:52:54:00:50:33:79}
	I1124 13:15:42.536799  136968 main.go:143] libmachine: domain addons-377447 has defined IP address 192.168.39.2 and MAC address 52:54:00:50:33:79 in network mk-addons-377447
	I1124 13:15:42.536937  136968 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/addons-377447/id_rsa Username:docker}
	I1124 13:15:43.722621  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.909001936s)
	I1124 13:15:43.722669  136968 addons.go:495] Verifying addon ingress=true in "addons-377447"
	I1124 13:15:43.722690  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.897202353s)
	I1124 13:15:43.722736  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.850336189s)
	I1124 13:15:43.722855  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.680340602s)
	I1124 13:15:43.722940  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.664305991s)
	I1124 13:15:43.722773  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.753187589s)
	I1124 13:15:43.722995  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.24372259s)
	I1124 13:15:43.723056  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.01789753s)
	I1124 13:15:43.723076  136968 addons.go:495] Verifying addon registry=true in "addons-377447"
	I1124 13:15:43.722784  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.699108706s)
	I1124 13:15:43.723232  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.556899612s)
	I1124 13:15:43.723254  136968 addons.go:495] Verifying addon metrics-server=true in "addons-377447"
	I1124 13:15:43.723321  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.30488343s)
	I1124 13:15:43.723425  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.542573914s)
	W1124 13:15:43.723463  136968 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:15:43.723504  136968 retry.go:31] will retry after 308.423472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
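The apply failure above is the usual CRD ordering race: the VolumeSnapshotClass object sits in the same kubectl invocation as the CRDs that define its kind, so the API server may not have established the new REST mapping by the time the custom resource is applied, and the addon layer simply waits and re-applies. A minimal back-off sketch of that recovery pattern in Go; retryWithBackoff and the stand-in apply function are illustrative names, not minikube's actual retry helpers:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff re-runs apply until it succeeds or attempts are
	// exhausted, roughly doubling the wait between tries. Illustrative only.
	func retryWithBackoff(apply func() error, attempts int, initial time.Duration) error {
		wait := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		// Stand-in for "kubectl apply": fails until the CRD is established.
		apply := func() error {
			calls++
			if calls < 3 {
				return errors.New(`no matches for kind "VolumeSnapshotClass"`)
			}
			return nil
		}
		if err := retryWithBackoff(apply, 5, 300*time.Millisecond); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("applied after", calls, "attempts")
		}
	}

In the log the retried apply --force (issued at 13:15:44.033) completes at 13:15:45.834 with no further error, which is exactly the recovery this sketch models.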
	I1124 13:15:43.724349  136968 out.go:179] * Verifying ingress addon...
	I1124 13:15:43.724351  136968 out.go:179] * Verifying registry addon...
	I1124 13:15:43.725111  136968 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-377447 service yakd-dashboard -n yakd-dashboard
	
	I1124 13:15:43.726491  136968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 13:15:43.726636  136968 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 13:15:43.756578  136968 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:15:43.756613  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.756578  136968 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:43.756632  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
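Each of the kapi.go:96 lines that follow is one tick of a readiness poll: list the pods matching a label selector and keep waiting while any of them is short of Running. A compact sketch of that loop, assuming a hypothetical listPhases callback in place of the real API call (the function names here are not kapi.go's actual signatures):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForRunning polls listPhases until every pod matching the selector
	// reports "Running", or the deadline passes. Sketch only; the real
	// helper queries the Kubernetes API with a label selector.
	func waitForRunning(selector string, listPhases func() []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			allRunning := true
			for _, phase := range listPhases() {
				if phase != "Running" {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, phase)
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q not Running within %v", selector, timeout)
	}

	func main() {
		ticks := 0
		phases := func() []string {
			ticks++
			if ticks < 4 {
				return []string{"Pending"}
			}
			return []string{"Running"}
		}
		fmt.Println(waitForRunning("kubernetes.io/minikube-addons=registry", phases, 10*time.Second))
	}

The timestamps below tick at roughly half-second intervals per selector, consistent with a sleep of a few hundred milliseconds between polls.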
	W1124 13:15:43.772549  136968 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
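The storage-provisioner-rancher warning is an optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the API server rejects the stale object with "the object has been modified". The standard cure is a read-modify-write loop that re-fetches on every attempt; a sketch using client-go's conflict-retry helper, with illustrative kubeconfig wiring in main (this is the generic pattern, not minikube's addon code):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markDefault re-reads the StorageClass on every attempt, so a
	// conflicting concurrent write just triggers another try instead of
	// failing with a stale-object error.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Println(err)
			return
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(markDefault(context.Background(), cs, "local-path"))
	}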
	I1124 13:15:44.033081  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:15:44.255730  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.255854  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.408049  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.64460362s)
	I1124 13:15:44.408125  136968 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-377447"
	I1124 13:15:44.408180  136968 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.882714988s)
	I1124 13:15:44.408218  136968 system_svc.go:56] duration metric: took 3.882828141s WaitForService to wait for kubelet
	I1124 13:15:44.408275  136968 kubeadm.go:587] duration metric: took 9.616163484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:15:44.408303  136968 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:15:44.408225  136968 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.874445423s)
	I1124 13:15:44.409538  136968 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 13:15:44.410270  136968 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 13:15:44.411588  136968 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:15:44.411608  136968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 13:15:44.412755  136968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 13:15:44.412779  136968 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 13:15:44.428759  136968 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:44.428785  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:44.435205  136968 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 13:15:44.435233  136968 node_conditions.go:123] node cpu capacity is 2
	I1124 13:15:44.435249  136968 node_conditions.go:105] duration metric: took 26.940032ms to run NodePressure ...
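The NodePressure check that just completed logs the node's capacity figures (17734596Ki ephemeral storage, 2 CPUs here) while verifying that none of the node's pressure conditions are raised. A self-contained sketch of that condition scan using the upstream k8s.io/api types, with a hand-built Node in main purely for demonstration:

	package main

	import (
		"fmt"

		v1 "k8s.io/api/core/v1"
	)

	// pressureProblems returns the node conditions that indicate resource
	// pressure, i.e. what a NodePressure verification has to rule out.
	func pressureProblems(node v1.Node) []v1.NodeConditionType {
		bad := []v1.NodeConditionType{v1.NodeMemoryPressure, v1.NodeDiskPressure, v1.NodePIDPressure}
		var found []v1.NodeConditionType
		for _, c := range node.Status.Conditions {
			for _, b := range bad {
				if c.Type == b && c.Status == v1.ConditionTrue {
					found = append(found, c.Type)
				}
			}
		}
		return found
	}

	func main() {
		// A healthy node: all pressure conditions report False.
		node := v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{
			{Type: v1.NodeMemoryPressure, Status: v1.ConditionFalse},
			{Type: v1.NodeDiskPressure, Status: v1.ConditionFalse},
			{Type: v1.NodePIDPressure, Status: v1.ConditionFalse},
		}}}
		fmt.Println("pressure conditions:", pressureProblems(node))
	}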
	I1124 13:15:44.435263  136968 start.go:242] waiting for startup goroutines ...
	I1124 13:15:44.482091  136968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 13:15:44.482135  136968 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 13:15:44.541694  136968 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:44.541722  136968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 13:15:44.625640  136968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:15:44.734308  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:44.734397  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.922738  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.232859  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:45.234203  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.416606  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:45.748178  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:45.749890  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.834162  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.800996055s)
	I1124 13:15:45.943923  136968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.318228185s)
	I1124 13:15:45.944946  136968 addons.go:495] Verifying addon gcp-auth=true in "addons-377447"
	I1124 13:15:45.946409  136968 out.go:179] * Verifying gcp-auth addon...
	I1124 13:15:45.948040  136968 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 13:15:45.951947  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.011905  136968 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 13:15:46.011933  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.235792  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.241502  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.419060  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.453764  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:46.734722  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.735154  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:46.916199  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:46.951689  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.233963  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.234280  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.416885  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.451819  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:47.732372  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.732493  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:47.916751  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:47.951885  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.232364  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.232470  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.417469  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.452339  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:48.731220  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:48.731546  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.918640  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:48.952486  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.235579  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.235602  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.415569  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.517383  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:49.730234  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.730703  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:49.915051  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:49.952133  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.231074  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.231318  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:50.443563  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.450785  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:50.730897  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:50.730938  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:50.915732  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:50.951537  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.230499  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.230979  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.417868  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.452070  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:51.730370  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:51.732593  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:51.915358  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:51.951021  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.235049  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:52.235078  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.415872  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.451528  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:52.730237  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:52.730749  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:52.915738  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:52.951249  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.231341  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.231355  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.415706  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.450802  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:53.730357  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:53.730670  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:53.917773  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:53.951494  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.231912  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.232228  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.415910  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.451519  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:54.731175  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:54.731476  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:54.915663  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:54.952841  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.384066  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:55.384601  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:55.417305  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.454067  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:55.732500  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:55.732943  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:55.916577  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:55.950942  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.234219  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:56.235082  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.415607  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.451633  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:56.730276  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:56.730488  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:56.916256  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:56.950994  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.230805  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.230805  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.418067  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.453601  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:57.733188  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:57.733318  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:57.919500  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:57.952850  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.231473  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:58.234017  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.418322  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.451938  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:58.731652  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:58.732085  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:58.916124  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:58.951737  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.230171  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:59.231706  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.415839  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.450807  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:59.730276  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:59.730815  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:59.915352  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:59.951520  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.230764  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:00.230978  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.416150  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.453087  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:00.731102  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:00.731192  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:00.915777  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:00.951156  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.231833  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:01.232706  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.422471  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.454086  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:01.733467  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:01.733587  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:01.916234  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:01.950860  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.231041  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.233036  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.415335  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.450769  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:02.732234  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:02.732495  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:02.917191  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:02.953717  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.231078  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.233097  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.415924  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.452590  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:03.730344  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:03.734598  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:03.918095  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:03.952857  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.230716  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:04.230753  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.417826  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.455264  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:04.733337  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:04.733903  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:04.916399  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:04.952603  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.232680  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.234732  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:05.418669  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.523364  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:05.731502  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:05.732782  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:05.915474  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:05.951926  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.231953  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.232961  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.417442  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.451055  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:06.731606  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:06.732869  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:06.917529  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:06.952963  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.232290  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:07.235351  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:07.417396  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:07.453597  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:07.921323  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.225918  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.226448  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.226902  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:08.230784  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.232650  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:08.417600  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.451256  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:08.733864  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:08.733911  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:08.916339  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:08.951268  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.231852  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:09.233092  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.417341  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.451723  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:09.730792  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:09.731200  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:09.915998  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:09.954869  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.230303  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:10.230491  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.416520  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.451654  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:10.729810  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:10.730217  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:10.918137  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:10.953360  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.229773  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.229796  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:11.414657  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.451587  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:11.730362  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:11.731098  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:11.916697  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:11.952538  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.231745  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:12.231804  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.416490  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.450945  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:12.729956  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:12.731397  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:12.916905  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:12.952121  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.231036  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.231275  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.415819  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.451319  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:13.729628  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:13.730586  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:13.916216  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:13.950895  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.231134  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.231230  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.416338  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.451297  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:14.730470  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:14.732782  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:14.916782  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:14.952606  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.232586  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.232593  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.417554  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.454155  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:15.731848  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:15.732747  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:15.916367  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:15.951186  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.233386  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:16.233474  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.416918  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.455825  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:16.730657  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:16.731295  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:16:16.916601  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:16.951643  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.234037  136968 kapi.go:107] duration metric: took 33.507541569s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 13:16:17.234198  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.418190  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.451561  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:17.729872  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:17.915874  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:17.954042  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.232364  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.418462  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.450874  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:18.730406  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:18.917494  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:18.951567  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.237171  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.416198  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.451214  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:19.731095  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:19.915855  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:19.951603  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.230572  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.417521  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.454479  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:20.731522  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:20.917881  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:20.951348  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.232976  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.415815  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.452252  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:21.732676  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:21.916960  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:21.952823  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.237818  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:22.415557  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.452216  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:22.731329  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:22.918525  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:22.952357  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.231080  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.415526  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.451293  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:23.730676  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:23.917139  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:23.951217  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.231640  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.415446  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.451259  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:24.730727  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:24.917227  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:24.951313  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.231215  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.417963  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.453395  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:25.730651  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:25.917157  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:25.953237  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.231781  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:26.422303  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.453014  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:26.730666  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:26.915708  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:26.954241  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.232514  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.416142  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.451933  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:27.730746  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:27.915959  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:27.951572  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.229661  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.420583  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.451317  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:28.732922  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:28.915470  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:28.951532  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.232083  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.418722  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.518597  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:29.729802  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:29.917959  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:29.954598  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.230497  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:30.416734  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.451459  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:30.729551  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:30.916227  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:30.950421  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.231579  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.416188  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.451560  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:31.731641  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:31.917845  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:31.952041  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.234644  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.417336  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.451173  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:32.732622  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:32.917125  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:32.952029  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.257165  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.423054  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.459444  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:33.732937  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:33.917014  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:33.955451  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.230150  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.417602  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.451710  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:34.731311  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:34.916053  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:34.952219  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.237460  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.422995  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.453637  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:35.731190  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:35.918076  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:35.953282  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.323179  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.418761  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.517816  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:36.730629  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:36.917114  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:36.952878  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.231076  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.415333  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.451151  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:37.732495  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:37.916819  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:37.951871  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.230810  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.417453  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.518416  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:38.732200  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:38.918347  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:38.952591  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.231078  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.417166  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.453754  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:39.730814  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:39.915227  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:39.954921  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.230952  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.415889  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.454435  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:40.732206  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:40.915614  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:40.951049  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.233896  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.418195  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.456358  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:41.731199  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:41.916074  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:41.952788  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.230046  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.415891  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.455767  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:42.732724  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:42.914933  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:42.953359  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.232044  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.417070  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.455323  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:43.732472  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:43.920966  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:43.953842  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.230195  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.415804  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.517530  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:44.731993  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:44.916341  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:44.956403  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.233091  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.415389  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.450787  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:45.729909  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:45.917799  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:45.953133  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.230586  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.417125  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:46.452939  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:46.731184  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:46.918080  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:47.018555  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.231009  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:47.418234  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:47.453880  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:47.730540  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:47.916381  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:47.951774  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.232153  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:48.418390  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.451839  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:48.730986  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:48.920809  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:48.952561  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.229865  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:49.417697  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:16:49.455796  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:49.742733  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:49.918346  136968 kapi.go:107] duration metric: took 1m5.506731412s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 13:16:49.953929  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.231890  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:50.459625  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:50.736437  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:50.952825  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.236314  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:51.456723  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:51.732705  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:51.953198  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.232520  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:52.453936  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:52.735843  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:52.952681  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:53.231369  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:53.451851  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:53.730274  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:53.952282  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.238629  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:54.537118  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:54.732820  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:54.953066  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.230398  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:55.452092  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:55.730781  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:55.952532  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.230664  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:56.451615  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:56.732446  136968 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:16:56.953950  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.230863  136968 kapi.go:107] duration metric: took 1m13.504222828s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 13:16:57.452026  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:57.952147  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.452084  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:58.952000  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.453767  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:16:59.954600  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.453319  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:00.952668  136968 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:17:01.451771  136968 kapi.go:107] duration metric: took 1m15.503725807s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 13:17:01.453389  136968 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-377447 cluster.
	I1124 13:17:01.454433  136968 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 13:17:01.455515  136968 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 13:17:01.456750  136968 out.go:179] * Enabled addons: cloud-spanner, ingress-dns, registry-creds, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1124 13:17:01.457787  136968 addons.go:530] duration metric: took 1m26.665632381s for enable addons: enabled=[cloud-spanner ingress-dns registry-creds storage-provisioner inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1124 13:17:01.458405  136968 start.go:247] waiting for cluster config update ...
	I1124 13:17:01.458462  136968 start.go:256] writing updated cluster config ...
	I1124 13:17:01.458824  136968 ssh_runner.go:195] Run: rm -f paused
	I1124 13:17:01.466666  136968 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:17:01.552358  136968 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6zxkz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.557017  136968 pod_ready.go:94] pod "coredns-66bc5c9577-6zxkz" is "Ready"
	I1124 13:17:01.557039  136968 pod_ready.go:86] duration metric: took 4.641282ms for pod "coredns-66bc5c9577-6zxkz" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.559102  136968 pod_ready.go:83] waiting for pod "etcd-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.563921  136968 pod_ready.go:94] pod "etcd-addons-377447" is "Ready"
	I1124 13:17:01.563947  136968 pod_ready.go:86] duration metric: took 4.810345ms for pod "etcd-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.566623  136968 pod_ready.go:83] waiting for pod "kube-apiserver-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.571341  136968 pod_ready.go:94] pod "kube-apiserver-addons-377447" is "Ready"
	I1124 13:17:01.571359  136968 pod_ready.go:86] duration metric: took 4.71333ms for pod "kube-apiserver-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.573254  136968 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:01.871213  136968 pod_ready.go:94] pod "kube-controller-manager-addons-377447" is "Ready"
	I1124 13:17:01.871250  136968 pod_ready.go:86] duration metric: took 297.973392ms for pod "kube-controller-manager-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.072426  136968 pod_ready.go:83] waiting for pod "kube-proxy-bvcds" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.471454  136968 pod_ready.go:94] pod "kube-proxy-bvcds" is "Ready"
	I1124 13:17:02.471490  136968 pod_ready.go:86] duration metric: took 399.032742ms for pod "kube-proxy-bvcds" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:02.671801  136968 pod_ready.go:83] waiting for pod "kube-scheduler-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.070709  136968 pod_ready.go:94] pod "kube-scheduler-addons-377447" is "Ready"
	I1124 13:17:03.070745  136968 pod_ready.go:86] duration metric: took 398.916661ms for pod "kube-scheduler-addons-377447" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:17:03.070764  136968 pod_ready.go:40] duration metric: took 1.604042231s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:17:03.116564  136968 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:17:03.118052  136968 out.go:179] * Done! kubectl is now configured to use "addons-377447" cluster and "default" namespace by default
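	[editor's note] The repeated kapi.go:96 entries above are minikube's addon wait loop: each label selector (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) is polled roughly every 500ms until its pods leave Pending, and kapi.go:107 then records the total duration. Below is a minimal client-go sketch of the same pattern; it is an illustration, not minikube's actual kapi code, and the assumption that the gcp-auth addon runs in a "gcp-auth" namespace is taken from convention, not from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether the selector matches at least one pod and
// every matched pod has its Ready condition set to True.
func podsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Load ~/.kube/config, the same kubeconfig "minikube start" writes.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	start := time.Now()
	for {
		ok, err := podsReady(ctx, cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth")
		if err != nil {
			panic(err)
		}
		if ok {
			fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=gcp-auth\n", time.Since(start))
			return
		}
		// Mirrors the kapi.go:96 line that repeats throughout the log.
		fmt.Println(`waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending`)
		time.Sleep(500 * time.Millisecond)
	}
}

	From the shell, `kubectl wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth -n gcp-auth --timeout=6m` blocks on the same condition.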
	
	
	==> CRI-O <==
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.266316900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=618d73c8-5ff5-4ded-9de7-ab77486cf343 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.266433401Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=618d73c8-5ff5-4ded-9de7-ab77486cf343 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.266820886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ab92c33447636a197aae8687e74a8f4279c5354b61d1b0d9d783752d23e3b69,PodSandboxId:6bfbf187bcd14ce5c838890d00de332d184069b89248d75ff5a36e8e372a2c36,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763990268300436911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be5a4fcf-d0b1-4b78-b885-5735b908730d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5461cbcfdcaa07dcb29df80a96b6eb1987ec4730f75c86fd99c4f2f5b28c2f,PodSandboxId:f820956ecdaf7cb079857d152426f22274cb5539f59fab0ea80a4a02f3e75214,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763990227810637353,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de78fc6e-5604-4ab6-a3d1-77bc45527e8f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6704591932b8a605c83bc555115f59a5e8a3e2de593c46ad510b3e7f54df3c7,PodSandboxId:a811c5ebc2a75011a131dad5d274184e626816ee745a1af99f45ce18369a09fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763990216651596769,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l44gl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9080109-fb66-42b7-aa87-20132ecddc2b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a63441c757398906d801753abc670bce8b363ad3f8b2bb736991356e1d53c72,PodSandboxId:f7e74cfb7cb36317627789b2ab56a570ac8435eec334d4c50db4b5b7e7baa476,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763990198287641909,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v4c2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5993d23e-53b9-4c97-95e1-b1aa366e56d3,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a25a6fc92b7202964a9801c9592caed6aabb06d63a9d84a9e20311e90b8f31,PodSandboxId:546199e6ca0a8ddb69a96f65877b4d0c5e9436bf38ec5a4cb802dd9a117a0ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763990197701845498,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xvplk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc62b1d9-c6e4-43c7-8b5e-53aa6cbd8ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03870a0d3443cdd127f7b9c12f31ef053af491ec9d416b2c2b6c90fac1402010,PodSandboxId:8d9b5bde2d83eeab9a4271690e86e48056f39fd48556b81bd02ec709addf4a79,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763990192242420499,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-mcgnc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9ac8bcd4-5ae8-4118-abea-79210199083c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdcb36b1ae2bd6ade61ab01d7102542799004a38bfbcc8fb5f453e6c64f9e9a,PodSandboxId:ab42324dcefc11d8dfbb1fb1ed0e22c91e2bcd87ee46aa724f40a795d29614e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763990168324220589,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5185dc6e-a201-4717-9f3c-24e7ea61f5c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1dbf6d617cfc9f9f10d0958dc2f3543491feff6f98a0d7be9c5c116221edce,PodSandboxId:010673d7dc01355e433172ebc68f1f06ad77b5aa33f4934e2e65b6bc938a8cb8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763990152191247961,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dfe46b-55f9-4ba4-b9df-1600e6cc125f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fc553f7c2c997c967013cf53450ee9bb1079d0d755674582ead72403f048fd,PodSandboxId:cebcb76b140ca2179b245cbcf2319dab2c2e7afc43ef17227f6465f5ff8a1a14,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763990141889880218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf27732-0ab9-45ac-ac7e-2dfb8b9baa34,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d104badde195064278086381910b98cbe0366f2fe9a8473a623edf49e8583048,PodSandboxId:2aca29911f8e5ddd4186b6296c745b2dc91ba8d938632f59a3ce921e38634029,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763990135938032225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6zxkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c583e8-4930-4c8a-b9af-c95ff7a30529,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9a5082ca26580e2231bf47f011ad6735535d782619eb4dc124cd699d0e1f11,PodSandboxId:2e80c27fc34e78bf94e49ab16147868a1afcfdc45a1686ff2a39c3019ee90f53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763990135288587734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39664132-6873-4007-b90c-d8ee37a0ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac212d3d72b3204108fa0ce26e279e5824419c3ea7fb9ee43ca6422147a2748c,PodSandboxId:1cf8baee2f43930d12e2b82aacc2d19efc0f5d6c319aa3747f90118f8fa93d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763990123911618097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ac20284623d7f0b4623e0039b77601,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP
\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba5d243fa8c7cdcd902cef493e997e157acb1c079530a0f50d6c89c01ecc702,PodSandboxId:9eb833b40e6cb635ecf2717adf625ef485ba43311109e508e51a485e22f93303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763990123884096663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e0d333dbb337b40761a6a8dec72db2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57438aa83b01c7be9011ba1ef1dcff502e16f10af50c66976ff3646e1576635e,PodSandboxId:6477feb996591e332afc2015498b35cc08f90a0e72ff59018c7e8618a849c0b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763990123864108375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a905652204d7c833d44755efe97faf,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ba4c18345a2ea91e6c4cfc9f334a30119c5024ce4a46eb3a533cf630c7c900,PodSandboxId:4ab816f190188a27157de16cac222eeb4b8f593ce3788a328d8210f552bfeb77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763990123838413617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59e69a5b8762288d240e7f1a2c7dc296,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=618d73c8-5ff5-4ded-9de7-ab77486cf343 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.298851089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=162d08de-6018-45de-95f0-114e3804b8b2 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.298983787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=162d08de-6018-45de-95f0-114e3804b8b2 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.301052082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5629e0f9-e673-49f0-9788-6dddb99c3e62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.303290574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763990410303209331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5629e0f9-e673-49f0-9788-6dddb99c3e62 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.305685675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6156e797-bc05-4e34-aa66-379757a0efb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.305878169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6156e797-bc05-4e34-aa66-379757a0efb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.306982907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ab92c33447636a197aae8687e74a8f4279c5354b61d1b0d9d783752d23e3b69,PodSandboxId:6bfbf187bcd14ce5c838890d00de332d184069b89248d75ff5a36e8e372a2c36,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763990268300436911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be5a4fcf-d0b1-4b78-b885-5735b908730d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5461cbcfdcaa07dcb29df80a96b6eb1987ec4730f75c86fd99c4f2f5b28c2f,PodSandboxId:f820956ecdaf7cb079857d152426f22274cb5539f59fab0ea80a4a02f3e75214,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763990227810637353,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de78fc6e-5604-4ab6-a3d1-77bc45527e8f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6704591932b8a605c83bc555115f59a5e8a3e2de593c46ad510b3e7f54df3c7,PodSandboxId:a811c5ebc2a75011a131dad5d274184e626816ee745a1af99f45ce18369a09fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763990216651596769,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l44gl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9080109-fb66-42b7-aa87-20132ecddc2b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a63441c757398906d801753abc670bce8b363ad3f8b2bb736991356e1d53c72,PodSandboxId:f7e74cfb7cb36317627789b2ab56a570ac8435eec334d4c50db4b5b7e7baa476,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763990198287641909,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v4c2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5993d23e-53b9-4c97-95e1-b1aa366e56d3,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a25a6fc92b7202964a9801c9592caed6aabb06d63a9d84a9e20311e90b8f31,PodSandboxId:546199e6ca0a8ddb69a96f65877b4d0c5e9436bf38ec5a4cb802dd9a117a0ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763990197701845498,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xvplk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc62b1d9-c6e4-43c7-8b5e-53aa6cbd8ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03870a0d3443cdd127f7b9c12f31ef053af491ec9d416b2c2b6c90fac1402010,PodSandboxId:8d9b5bde2d83eeab9a4271690e86e48056f39fd48556b81bd02ec709addf4a79,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763990192242420499,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-mcgnc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9ac8bcd4-5ae8-4118-abea-79210199083c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdcb36b1ae2bd6ade61ab01d7102542799004a38bfbcc8fb5f453e6c64f9e9a,PodSandboxId:ab42324dcefc11d8dfbb1fb1ed0e22c91e2bcd87ee46aa724f40a795d29614e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763990168324220589,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5185dc6e-a201-4717-9f3c-24e7ea61f5c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1dbf6d617cfc9f9f10d0958dc2f3543491feff6f98a0d7be9c5c116221edce,PodSandboxId:010673d7dc01355e433172ebc68f1f06ad77b5aa33f4934e2e65b6bc938a8cb8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763990152191247961,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dfe46b-55f9-4ba4-b9df-1600e6cc125f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fc553f7c2c997c967013cf53450ee9bb1079d0d755674582ead72403f048fd,PodSandboxId:cebcb76b140ca2179b245cbcf2319dab2c2e7afc43ef17227f6465f5ff8a1a14,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763990141889880218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf27732-0ab9-45ac-ac7e-2dfb8b9baa34,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d104badde195064278086381910b98cbe0366f2fe9a8473a623edf49e8583048,PodSandboxId:2aca29911f8e5ddd4186b6296c745b2dc91ba8d938632f59a3ce921e38634029,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763990135938032225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6zxkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c583e8-4930-4c8a-b9af-c95ff7a30529,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9a5082ca26580e2231bf47f011ad6735535d782619eb4dc124cd699d0e1f11,PodSandboxId:2e80c27fc34e78bf94e49ab16147868a1afcfdc45a1686ff2a39c3019ee90f53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763990135288587734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39664132-6873-4007-b90c-d8ee37a0ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac212d3d72b3204108fa0ce26e279e5824419c3ea7fb9ee43ca6422147a2748c,PodSandboxId:1cf8baee2f43930d12e2b82aacc2d19efc0f5d6c319aa3747f90118f8fa93d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763990123911618097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ac20284623d7f0b4623e0039b77601,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP
\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba5d243fa8c7cdcd902cef493e997e157acb1c079530a0f50d6c89c01ecc702,PodSandboxId:9eb833b40e6cb635ecf2717adf625ef485ba43311109e508e51a485e22f93303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763990123884096663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e0d333dbb337b40761a6a8dec72db2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57438aa83b01c7be9011ba1ef1dcff502e16f10af50c66976ff3646e1576635e,PodSandboxId:6477feb996591e332afc2015498b35cc08f90a0e72ff59018c7e8618a849c0b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763990123864108375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a905652204d7c833d44755efe97faf,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ba4c18345a2ea91e6c4cfc9f334a30119c5024ce4a46eb3a533cf630c7c900,PodSandboxId:4ab816f190188a27157de16cac222eeb4b8f593ce3788a328d8210f552bfeb77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763990123838413617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59e69a5b8762288d240e7f1a2c7dc296,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6156e797-bc05-4e34-aa66-379757a0efb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.341044991Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d6b3fbd-e4be-42be-aaa3-2b7300fc2cc8 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.341126307Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d6b3fbd-e4be-42be-aaa3-2b7300fc2cc8 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.343246351Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be7b8117-6ee0-472b-8409-077ffaed4abb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.344770845Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763990410344744278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be7b8117-6ee0-472b-8409-077ffaed4abb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.345786357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ff5ebf2-4186-40a4-ab6e-bbf8a00cd3aa name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.345855577Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ff5ebf2-4186-40a4-ab6e-bbf8a00cd3aa name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.346237919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ab92c33447636a197aae8687e74a8f4279c5354b61d1b0d9d783752d23e3b69,PodSandboxId:6bfbf187bcd14ce5c838890d00de332d184069b89248d75ff5a36e8e372a2c36,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763990268300436911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be5a4fcf-d0b1-4b78-b885-5735b908730d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5461cbcfdcaa07dcb29df80a96b6eb1987ec4730f75c86fd99c4f2f5b28c2f,PodSandboxId:f820956ecdaf7cb079857d152426f22274cb5539f59fab0ea80a4a02f3e75214,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763990227810637353,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de78fc6e-5604-4ab6-a3d1-77bc45527e8f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6704591932b8a605c83bc555115f59a5e8a3e2de593c46ad510b3e7f54df3c7,PodSandboxId:a811c5ebc2a75011a131dad5d274184e626816ee745a1af99f45ce18369a09fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763990216651596769,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l44gl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9080109-fb66-42b7-aa87-20132ecddc2b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a63441c757398906d801753abc670bce8b363ad3f8b2bb736991356e1d53c72,PodSandboxId:f7e74cfb7cb36317627789b2ab56a570ac8435eec334d4c50db4b5b7e7baa476,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763990198287641909,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v4c2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5993d23e-53b9-4c97-95e1-b1aa366e56d3,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a25a6fc92b7202964a9801c9592caed6aabb06d63a9d84a9e20311e90b8f31,PodSandboxId:546199e6ca0a8ddb69a96f65877b4d0c5e9436bf38ec5a4cb802dd9a117a0ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763990197701845498,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xvplk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc62b1d9-c6e4-43c7-8b5e-53aa6cbd8ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03870a0d3443cdd127f7b9c12f31ef053af491ec9d416b2c2b6c90fac1402010,PodSandboxId:8d9b5bde2d83eeab9a4271690e86e48056f39fd48556b81bd02ec709addf4a79,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763990192242420499,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-mcgnc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9ac8bcd4-5ae8-4118-abea-79210199083c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdcb36b1ae2bd6ade61ab01d7102542799004a38bfbcc8fb5f453e6c64f9e9a,PodSandboxId:ab42324dcefc11d8dfbb1fb1ed0e22c91e2bcd87ee46aa724f40a795d29614e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763990168324220589,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5185dc6e-a201-4717-9f3c-24e7ea61f5c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1dbf6d617cfc9f9f10d0958dc2f3543491feff6f98a0d7be9c5c116221edce,PodSandboxId:010673d7dc01355e433172ebc68f1f06ad77b5aa33f4934e2e65b6bc938a8cb8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763990152191247961,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dfe46b-55f9-4ba4-b9df-1600e6cc125f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fc553f7c2c997c967013cf53450ee9bb1079d0d755674582ead72403f048fd,PodSandboxId:cebcb76b140ca2179b245cbcf2319dab2c2e7afc43ef17227f6465f5ff8a1a14,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763990141889880218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf27732-0ab9-45ac-ac7e-2dfb8b9baa34,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d104badde195064278086381910b98cbe0366f2fe9a8473a623edf49e8583048,PodSandboxId:2aca29911f8e5ddd4186b6296c745b2dc91ba8d938632f59a3ce921e38634029,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763990135938032225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6zxkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c583e8-4930-4c8a-b9af-c95ff7a30529,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9a5082ca26580e2231bf47f011ad6735535d782619eb4dc124cd699d0e1f11,PodSandboxId:2e80c27fc34e78bf94e49ab16147868a1afcfdc45a1686ff2a39c3019ee90f53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763990135288587734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39664132-6873-4007-b90c-d8ee37a0ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac212d3d72b3204108fa0ce26e279e5824419c3ea7fb9ee43ca6422147a2748c,PodSandboxId:1cf8baee2f43930d12e2b82aacc2d19efc0f5d6c319aa3747f90118f8fa93d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763990123911618097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ac20284623d7f0b4623e0039b77601,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP
\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba5d243fa8c7cdcd902cef493e997e157acb1c079530a0f50d6c89c01ecc702,PodSandboxId:9eb833b40e6cb635ecf2717adf625ef485ba43311109e508e51a485e22f93303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763990123884096663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e0d333dbb337b40761a6a8dec72db2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57438aa83b01c7be9011ba1ef1dcff502e16f10af50c66976ff3646e1576635e,PodSandboxId:6477feb996591e332afc2015498b35cc08f90a0e72ff59018c7e8618a849c0b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763990123864108375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a905652204d7c833d44755efe97faf,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ba4c18345a2ea91e6c4cfc9f334a30119c5024ce4a46eb3a533cf630c7c900,PodSandboxId:4ab816f190188a27157de16cac222eeb4b8f593ce3788a328d8210f552bfeb77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763990123838413617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59e69a5b8762288d240e7f1a2c7dc296,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ff5ebf2-4186-40a4-ab6e-bbf8a00cd3aa name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.361604099Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.376281185Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4aa929f-63a1-4a3a-92a1-e7bfdc5df2d3 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.376368648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4aa929f-63a1-4a3a-92a1-e7bfdc5df2d3 name=/runtime.v1.RuntimeService/Version
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.377624681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4ee1da16-e655-4f15-a95e-069ac13eb4cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.378852234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763990410378828748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4ee1da16-e655-4f15-a95e-069ac13eb4cd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.380031148Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f61b4cc-0b0d-46f1-8fd5-6d4f72ba5afc name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.380086486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f61b4cc-0b0d-46f1-8fd5-6d4f72ba5afc name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 13:20:10 addons-377447 crio[804]: time="2025-11-24 13:20:10.380441843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ab92c33447636a197aae8687e74a8f4279c5354b61d1b0d9d783752d23e3b69,PodSandboxId:6bfbf187bcd14ce5c838890d00de332d184069b89248d75ff5a36e8e372a2c36,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763990268300436911,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be5a4fcf-d0b1-4b78-b885-5735b908730d,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e5461cbcfdcaa07dcb29df80a96b6eb1987ec4730f75c86fd99c4f2f5b28c2f,PodSandboxId:f820956ecdaf7cb079857d152426f22274cb5539f59fab0ea80a4a02f3e75214,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763990227810637353,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: de78fc6e-5604-4ab6-a3d1-77bc45527e8f,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6704591932b8a605c83bc555115f59a5e8a3e2de593c46ad510b3e7f54df3c7,PodSandboxId:a811c5ebc2a75011a131dad5d274184e626816ee745a1af99f45ce18369a09fd,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763990216651596769,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-l44gl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e9080109-fb66-42b7-aa87-20132ecddc2b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:9a63441c757398906d801753abc670bce8b363ad3f8b2bb736991356e1d53c72,PodSandboxId:f7e74cfb7cb36317627789b2ab56a570ac8435eec334d4c50db4b5b7e7baa476,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763990198287641909,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-v4c2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5993d23e-53b9-4c97-95e1-b1aa366e56d3,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80a25a6fc92b7202964a9801c9592caed6aabb06d63a9d84a9e20311e90b8f31,PodSandboxId:546199e6ca0a8ddb69a96f65877b4d0c5e9436bf38ec5a4cb802dd9a117a0ed0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763990197701845498,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xvplk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fc62b1d9-c6e4-43c7-8b5e-53aa6cbd8ab9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03870a0d3443cdd127f7b9c12f31ef053af491ec9d416b2c2b6c90fac1402010,PodSandboxId:8d9b5bde2d83eeab9a4271690e86e48056f39fd48556b81bd02ec709addf4a79,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:
,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1763990192242420499,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-mcgnc,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 9ac8bcd4-5ae8-4118-abea-79210199083c,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdcb36b1ae2bd6ade61ab01d7102542799004a38bfbcc8fb5f453e6c64f9e9a,PodSandboxId:ab42324dcefc11d8dfbb1fb1ed0e22c91e2bcd87ee46aa724f40a795d29614e9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763990168324220589,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5185dc6e-a201-4717-9f3c-24e7ea61f5c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1dbf6d617cfc9f9f10d0958dc2f3543491feff6f98a0d7be9c5c116221edce,PodSandboxId:010673d7dc01355e433172ebc68f1f06ad77b5aa33f4934e2e65b6bc938a8cb8,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},
Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763990152191247961,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-pkczz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02dfe46b-55f9-4ba4-b9df-1600e6cc125f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3fc553f7c2c997c967013cf53450ee9bb1079d0d755674582ead72403f048fd,PodSandboxId:cebcb76b140ca2179b245cbcf2319dab2c2e7afc43ef17227f6465f5ff8a1a14,Metadata:&ContainerMetadata
{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763990141889880218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: faf27732-0ab9-45ac-ac7e-2dfb8b9baa34,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d104badde195064278086381910b98cbe0366f2fe9a8473a623edf49e8583048,PodSandboxId:2aca29911f8e5ddd4186b6296c745b2dc91ba8d938632f59a3ce921e38634029,Metadata:&ContainerMetadata{Name:coredn
s,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763990135938032225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6zxkz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4c583e8-4930-4c8a-b9af-c95ff7a30529,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9a5082ca26580e2231bf47f011ad6735535d782619eb4dc124cd699d0e1f11,PodSandboxId:2e80c27fc34e78bf94e49ab16147868a1afcfdc45a1686ff2a39c3019ee90f53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763990135288587734,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvcds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39664132-6873-4007-b90c-d8ee37a0ab04,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac212d3d72b3204108fa0ce26e279e5824419c3ea7fb9ee43ca6422147a2748c,PodSandboxId:1cf8baee2f43930d12e2b82aacc2d19efc0f5d6c319aa3747f90118f8fa93d78,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763990123911618097,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11ac20284623d7f0b4623e0039b77601,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP
\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba5d243fa8c7cdcd902cef493e997e157acb1c079530a0f50d6c89c01ecc702,PodSandboxId:9eb833b40e6cb635ecf2717adf625ef485ba43311109e508e51a485e22f93303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763990123884096663,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e0d333dbb337b40761a6a8dec72db2,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57438aa83b01c7be9011ba1ef1dcff502e16f10af50c66976ff3646e1576635e,PodSandboxId:6477feb996591e332afc2015498b35cc08f90a0e72ff59018c7e8618a849c0b0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763990123864108375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99a905652204d7c833d44755efe97faf,},Annot
ations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ba4c18345a2ea91e6c4cfc9f334a30119c5024ce4a46eb3a533cf630c7c900,PodSandboxId:4ab816f190188a27157de16cac222eeb4b8f593ce3788a328d8210f552bfeb77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763990123838413617,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-contro
ller-manager-addons-377447,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59e69a5b8762288d240e7f1a2c7dc296,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f61b4cc-0b0d-46f1-8fd5-6d4f72ba5afc name=/runtime.v1.RuntimeService/ListContainers
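
A note on the dump above: it is the routine runtime polling loop (Version, ImageFsInfo, ListContainers repeated every few hundred milliseconds), so identical ListContainersResponse payloads recur. The same listing can be reproduced interactively on the node; a minimal sketch, assuming the crictl binary bundled in the minikube VM and CRI-O's default socket path:

	minikube -p addons-377447 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

crictl renders the same Id/State/CreatedAt fields as the raw protobuf dump, in the tabular form shown in the container status section below.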
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	5ab92c3344763       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   6bfbf187bcd14       nginx                                      default
	1e5461cbcfdca       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   f820956ecdaf7       busybox                                    default
	f6704591932b8       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   a811c5ebc2a75       ingress-nginx-controller-6c8bf45fb-l44gl   ingress-nginx
	9a63441c75739       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago       Exited              patch                     1                   f7e74cfb7cb36       ingress-nginx-admission-patch-v4c2b        ingress-nginx
	80a25a6fc92b7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   546199e6ca0a8       ingress-nginx-admission-create-xvplk       ingress-nginx
	03870a0d3443c       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   8d9b5bde2d83e       local-path-provisioner-648f6765c9-mcgnc    local-path-storage
	8cdcb36b1ae2b       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   ab42324dcefc1       kube-ingress-dns-minikube                  kube-system
	da1dbf6d617cf       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   010673d7dc013       amd-gpu-device-plugin-pkczz                kube-system
	e3fc553f7c2c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   cebcb76b140ca       storage-provisioner                        kube-system
	d104badde1950       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   2aca29911f8e5       coredns-66bc5c9577-6zxkz                   kube-system
	cf9a5082ca265       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   2e80c27fc34e7       kube-proxy-bvcds                           kube-system
	ac212d3d72b32       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   1cf8baee2f439       kube-apiserver-addons-377447               kube-system
	6ba5d243fa8c7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   9eb833b40e6cb       kube-scheduler-addons-377447               kube-system
	57438aa83b01c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   6477feb996591       etcd-addons-377447                         kube-system
	32ba4c18345a2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   4ab816f190188       kube-controller-manager-addons-377447      kube-system
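
The two Exited rows above are the ingress-nginx admission webhook jobs; note the patch container is on ATTEMPT 1, i.e. it restarted once before completing. Since the ingress test is the failing one, those job logs are worth pulling while the pods still exist (they may be garbage-collected later), e.g.:

	kubectl --context addons-377447 -n ingress-nginx logs ingress-nginx-admission-patch-v4c2b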
	
	
	==> coredns [d104badde195064278086381910b98cbe0366f2fe9a8473a623edf49e8583048] <==
	[INFO] 10.244.0.8:53011 - 38910 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00039825s
	[INFO] 10.244.0.8:53011 - 15178 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000088186s
	[INFO] 10.244.0.8:53011 - 59121 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000101874s
	[INFO] 10.244.0.8:53011 - 23192 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000099733s
	[INFO] 10.244.0.8:53011 - 46156 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000236705s
	[INFO] 10.244.0.8:53011 - 62913 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000136064s
	[INFO] 10.244.0.8:53011 - 19927 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000183431s
	[INFO] 10.244.0.8:45241 - 63035 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011251s
	[INFO] 10.244.0.8:45241 - 63320 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000117058s
	[INFO] 10.244.0.8:50448 - 3073 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010117s
	[INFO] 10.244.0.8:50448 - 2792 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000139114s
	[INFO] 10.244.0.8:37300 - 31498 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079239s
	[INFO] 10.244.0.8:37300 - 31258 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000132409s
	[INFO] 10.244.0.8:47287 - 61318 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098205s
	[INFO] 10.244.0.8:47287 - 61476 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000213066s
	[INFO] 10.244.0.23:47712 - 24216 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000814936s
	[INFO] 10.244.0.23:38215 - 12731 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194043s
	[INFO] 10.244.0.23:58529 - 38548 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000312715s
	[INFO] 10.244.0.23:37429 - 57152 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.001069316s
	[INFO] 10.244.0.23:38761 - 42856 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000431423s
	[INFO] 10.244.0.23:45227 - 40084 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123029s
	[INFO] 10.244.0.23:40258 - 3155 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001325362s
	[INFO] 10.244.0.23:55069 - 46944 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003512215s
	[INFO] 10.244.0.28:43267 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000295594s
	[INFO] 10.244.0.28:59487 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160387s
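
The NXDOMAIN-then-NOERROR runs above are ordinary resolv.conf search-path expansion, not lookup failures: with the kubelet default ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried against each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the bare name resolves NOERROR. The same expansion can be observed from inside the cluster; a sketch, assuming a throwaway pod using the busybox image already present in this run:

	kubectl --context addons-377447 run dnscheck --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system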
	
	
	==> describe nodes <==
	Name:               addons-377447
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-377447
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=addons-377447
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_15_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-377447
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:15:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-377447
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:20:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:18:32 +0000   Mon, 24 Nov 2025 13:15:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:18:32 +0000   Mon, 24 Nov 2025 13:15:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:18:32 +0000   Mon, 24 Nov 2025 13:15:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:18:32 +0000   Mon, 24 Nov 2025 13:15:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-377447
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e0710977f0fa49ba9ad45fe1cc92849c
	  System UUID:                e0710977-f0fa-49ba-9ad4-5fe1cc92849c
	  Boot ID:                    5e5a6d77-c39b-4cee-b050-f9936d420f5c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-world-app-5d498dc89-s7lts             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-l44gl    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-pkczz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 coredns-66bc5c9577-6zxkz                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-377447                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-377447                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-377447       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-proxy-bvcds                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-377447                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  local-path-storage          local-path-provisioner-648f6765c9-mcgnc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m34s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m48s)  kubelet          Node addons-377447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m48s)  kubelet          Node addons-377447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m48s)  kubelet          Node addons-377447 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s                  kubelet          Node addons-377447 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s                  kubelet          Node addons-377447 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s                  kubelet          Node addons-377447 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s                  kubelet          Node addons-377447 status is now: NodeReady
	  Normal  RegisteredNode           4m37s                  node-controller  Node addons-377447 event: Registered Node addons-377447 in Controller
	
	
	==> dmesg <==
	[  +5.159349] kauditd_printk_skb: 248 callbacks suppressed
	[  +6.595388] kauditd_printk_skb: 5 callbacks suppressed
	[Nov24 13:16] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.125088] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.304807] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.620939] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.022158] kauditd_printk_skb: 101 callbacks suppressed
	[  +0.169535] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.943728] kauditd_printk_skb: 138 callbacks suppressed
	[  +0.000112] kauditd_printk_skb: 78 callbacks suppressed
	[  +6.726505] kauditd_printk_skb: 26 callbacks suppressed
	[Nov24 13:17] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.970443] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.828318] kauditd_printk_skb: 89 callbacks suppressed
	[  +3.932219] kauditd_printk_skb: 66 callbacks suppressed
	[  +0.694985] kauditd_printk_skb: 106 callbacks suppressed
	[  +0.784576] kauditd_printk_skb: 231 callbacks suppressed
	[  +5.429862] kauditd_printk_skb: 21 callbacks suppressed
	[Nov24 13:18] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.901639] kauditd_printk_skb: 26 callbacks suppressed
	[  +8.033406] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000057] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.846701] kauditd_printk_skb: 41 callbacks suppressed
	[Nov24 13:20] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [57438aa83b01c7be9011ba1ef1dcff502e16f10af50c66976ff3646e1576635e] <==
	{"level":"warn","ts":"2025-11-24T13:16:08.216466Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T13:16:07.911884Z","time spent":"304.573398ms","remote":"127.0.0.1:55886","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-24T13:16:08.216818Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"491.017578ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:16:08.216864Z","caller":"traceutil/trace.go:172","msg":"trace[1307950721] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:955; }","duration":"491.064489ms","start":"2025-11-24T13:16:07.725791Z","end":"2025-11-24T13:16:08.216856Z","steps":["trace[1307950721] 'agreement among raft nodes before linearized reading'  (duration: 404.947695ms)","trace[1307950721] 'range keys from in-memory index tree'  (duration: 86.060494ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:16:08.216908Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T13:16:07.725778Z","time spent":"491.123124ms","remote":"127.0.0.1:55886","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-24T13:16:08.217174Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.572428ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:16:08.217215Z","caller":"traceutil/trace.go:172","msg":"trace[1715184199] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:956; }","duration":"270.61544ms","start":"2025-11-24T13:16:07.946594Z","end":"2025-11-24T13:16:08.217210Z","steps":["trace[1715184199] 'agreement among raft nodes before linearized reading'  (duration: 270.560208ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:08.217377Z","caller":"traceutil/trace.go:172","msg":"trace[26920744] transaction","detail":"{read_only:false; response_revision:956; number_of_response:1; }","duration":"362.188749ms","start":"2025-11-24T13:16:07.855181Z","end":"2025-11-24T13:16:08.217369Z","steps":["trace[26920744] 'process raft request'  (duration: 275.580602ms)","trace[26920744] 'compare'  (duration: 86.328072ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:16:08.217453Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-24T13:16:07.855163Z","time spent":"362.253732ms","remote":"127.0.0.1:56018","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-doqflynvaakkaqpcl4eggb57tu\" mod_revision:934 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-doqflynvaakkaqpcl4eggb57tu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-doqflynvaakkaqpcl4eggb57tu\" > >"}
	{"level":"warn","ts":"2025-11-24T13:16:08.217556Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.045082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:16:08.217594Z","caller":"traceutil/trace.go:172","msg":"trace[308589915] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:956; }","duration":"232.082997ms","start":"2025-11-24T13:16:07.985506Z","end":"2025-11-24T13:16:08.217589Z","steps":["trace[308589915] 'agreement among raft nodes before linearized reading'  (duration: 232.015534ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:22.231894Z","caller":"traceutil/trace.go:172","msg":"trace[1475109384] transaction","detail":"{read_only:false; response_revision:1010; number_of_response:1; }","duration":"187.251672ms","start":"2025-11-24T13:16:22.044628Z","end":"2025-11-24T13:16:22.231880Z","steps":["trace[1475109384] 'process raft request'  (duration: 187.111821ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:36.315265Z","caller":"traceutil/trace.go:172","msg":"trace[1458642835] transaction","detail":"{read_only:false; response_revision:1057; number_of_response:1; }","duration":"199.722871ms","start":"2025-11-24T13:16:36.115531Z","end":"2025-11-24T13:16:36.315254Z","steps":["trace[1458642835] 'process raft request'  (duration: 199.552946ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:43.658203Z","caller":"traceutil/trace.go:172","msg":"trace[1618431110] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"186.103013ms","start":"2025-11-24T13:16:43.472084Z","end":"2025-11-24T13:16:43.658187Z","steps":["trace[1618431110] 'process raft request'  (duration: 185.949162ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:50.670612Z","caller":"traceutil/trace.go:172","msg":"trace[1368367370] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"197.512077ms","start":"2025-11-24T13:16:50.473085Z","end":"2025-11-24T13:16:50.670597Z","steps":["trace[1368367370] 'process raft request'  (duration: 197.37742ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:16:54.534005Z","caller":"traceutil/trace.go:172","msg":"trace[469253358] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"178.168263ms","start":"2025-11-24T13:16:54.355489Z","end":"2025-11-24T13:16:54.533657Z","steps":["trace[469253358] 'process raft request'  (duration: 176.897405ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:22.106842Z","caller":"traceutil/trace.go:172","msg":"trace[341656084] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"109.549901ms","start":"2025-11-24T13:17:21.997280Z","end":"2025-11-24T13:17:22.106830Z","steps":["trace[341656084] 'process raft request'  (duration: 109.469406ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:27.051649Z","caller":"traceutil/trace.go:172","msg":"trace[453647314] transaction","detail":"{read_only:false; response_revision:1369; number_of_response:1; }","duration":"122.645467ms","start":"2025-11-24T13:17:26.928851Z","end":"2025-11-24T13:17:27.051497Z","steps":["trace[453647314] 'process raft request'  (duration: 121.941162ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:35.222102Z","caller":"traceutil/trace.go:172","msg":"trace[1677645689] transaction","detail":"{read_only:false; response_revision:1410; number_of_response:1; }","duration":"111.806015ms","start":"2025-11-24T13:17:35.110284Z","end":"2025-11-24T13:17:35.222090Z","steps":["trace[1677645689] 'process raft request'  (duration: 111.723002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:35.903091Z","caller":"traceutil/trace.go:172","msg":"trace[877106203] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1413; }","duration":"113.034214ms","start":"2025-11-24T13:17:35.790045Z","end":"2025-11-24T13:17:35.903079Z","steps":["trace[877106203] 'process raft request'  (duration: 112.869772ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:41.550163Z","caller":"traceutil/trace.go:172","msg":"trace[1511504044] linearizableReadLoop","detail":"{readStateIndex:1498; appliedIndex:1498; }","duration":"287.565889ms","start":"2025-11-24T13:17:41.262581Z","end":"2025-11-24T13:17:41.550147Z","steps":["trace[1511504044] 'read index received'  (duration: 287.559907ms)","trace[1511504044] 'applied index is now lower than readState.Index'  (duration: 5.233µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:17:41.550292Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"287.704468ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:17:41.550339Z","caller":"traceutil/trace.go:172","msg":"trace[1660798906] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1449; }","duration":"287.76924ms","start":"2025-11-24T13:17:41.262559Z","end":"2025-11-24T13:17:41.550328Z","steps":["trace[1660798906] 'agreement among raft nodes before linearized reading'  (duration: 287.682691ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:17:41.550848Z","caller":"traceutil/trace.go:172","msg":"trace[1614161312] transaction","detail":"{read_only:false; response_revision:1450; number_of_response:1; }","duration":"299.390536ms","start":"2025-11-24T13:17:41.251446Z","end":"2025-11-24T13:17:41.550836Z","steps":["trace[1614161312] 'process raft request'  (duration: 299.074387ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:17:41.551135Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"219.092945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:17:41.551212Z","caller":"traceutil/trace.go:172","msg":"trace[1473581965] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1450; }","duration":"219.182793ms","start":"2025-11-24T13:17:41.332021Z","end":"2025-11-24T13:17:41.551204Z","steps":["trace[1473581965] 'agreement among raft nodes before linearized reading'  (duration: 218.635852ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:20:10 up 5 min,  0 users,  load average: 0.89, 1.53, 0.79
	Linux addons-377447 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ac212d3d72b3204108fa0ce26e279e5824419c3ea7fb9ee43ca6422147a2748c] <==
	W1124 13:16:24.871725       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 13:16:24.871772       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1124 13:16:24.901980       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 13:17:14.888894       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:34470: use of closed network connection
	E1124 13:17:15.080515       1 conn.go:339] Error on socket receive: read tcp 192.168.39.2:8443->192.168.39.1:34504: use of closed network connection
	I1124 13:17:24.253874       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.28.22"}
	I1124 13:17:43.629384       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 13:17:43.820357       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.255.85"}
	I1124 13:18:20.125776       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 13:18:25.888460       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1124 13:18:49.166568       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 13:18:49.168025       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 13:18:49.202453       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 13:18:49.204003       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 13:18:49.229216       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 13:18:49.229314       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1124 13:18:49.260852       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1124 13:18:49.260996       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1124 13:18:50.204145       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1124 13:18:50.261092       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1124 13:18:50.288168       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1124 13:20:09.358852       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.240.53"}
	
	
	==> kube-controller-manager [32ba4c18345a2ea91e6c4cfc9f334a30119c5024ce4a46eb3a533cf630c7c900] <==
	E1124 13:18:54.501297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:18:57.765043       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:18:57.766670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:18:58.954323       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:18:58.955303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:18:59.511868       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:18:59.513060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1124 13:19:03.632192       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 13:19:03.632236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:19:03.659588       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 13:19:03.659639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 13:19:09.059722       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:09.060860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:19:09.581721       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:09.582633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:19:10.606123       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:10.607161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:19:30.618576       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:30.619626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:19:31.852774       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:31.853647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:19:31.872477       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:19:31.873441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1124 13:20:10.464256       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1124 13:20:10.465400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cf9a5082ca26580e2231bf47f011ad6735535d782619eb4dc124cd699d0e1f11] <==
	I1124 13:15:35.943124       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:15:36.045608       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:15:36.045649       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.2"]
	E1124 13:15:36.045706       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:15:36.319494       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1124 13:15:36.319567       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 13:15:36.319594       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:15:36.367779       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:15:36.369835       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:15:36.369887       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:15:36.389110       1 config.go:309] "Starting node config controller"
	I1124 13:15:36.389137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:15:36.389144       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:15:36.389424       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:15:36.389431       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:15:36.389489       1 config.go:200] "Starting service config controller"
	I1124 13:15:36.389493       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:15:36.389503       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:15:36.389507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:15:36.490409       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:15:36.494243       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:15:36.494307       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [6ba5d243fa8c7cdcd902cef493e997e157acb1c079530a0f50d6c89c01ecc702] <==
	E1124 13:15:26.544887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:26.545036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:15:26.545081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:15:26.545203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:26.545285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:15:26.545394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:26.545454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:15:26.545497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:15:26.545534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:26.546394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:26.547420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:15:27.401264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:15:27.416274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:15:27.444484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:15:27.446850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:15:27.486647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:15:27.496677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:15:27.545218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:15:27.556408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:15:27.714146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:15:27.745655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:15:27.767731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:15:27.830900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:15:27.877820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 13:15:30.424984       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:18:52 addons-377447 kubelet[1479]: I1124 13:18:52.476017    1479 scope.go:117] "RemoveContainer" containerID="549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5"
	Nov 24 13:18:52 addons-377447 kubelet[1479]: E1124 13:18:52.476667    1479 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5\": container with ID starting with 549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5 not found: ID does not exist" containerID="549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5"
	Nov 24 13:18:52 addons-377447 kubelet[1479]: I1124 13:18:52.476716    1479 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5"} err="failed to get container status \"549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5\": rpc error: code = NotFound desc = could not find container \"549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5\": container with ID starting with 549ef63d106fde8f98b64a1963d1e7deddb4a00803255b29690d0995899ff2b5 not found: ID does not exist"
	Nov 24 13:18:53 addons-377447 kubelet[1479]: I1124 13:18:53.325408    1479 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0127a3a4-fe3a-46e5-9886-e1875cca33bb" path="/var/lib/kubelet/pods/0127a3a4-fe3a-46e5-9886-e1875cca33bb/volumes"
	Nov 24 13:18:53 addons-377447 kubelet[1479]: I1124 13:18:53.326176    1479 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2424c6b1-0008-46c5-b244-0c8caae48f89" path="/var/lib/kubelet/pods/2424c6b1-0008-46c5-b244-0c8caae48f89/volumes"
	Nov 24 13:18:53 addons-377447 kubelet[1479]: I1124 13:18:53.326588    1479 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d92fd1a-a913-4939-a41b-340ad3c40ce2" path="/var/lib/kubelet/pods/7d92fd1a-a913-4939-a41b-340ad3c40ce2/volumes"
	Nov 24 13:18:59 addons-377447 kubelet[1479]: E1124 13:18:59.678834    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990339678550159  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:18:59 addons-377447 kubelet[1479]: E1124 13:18:59.678877    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990339678550159  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:09 addons-377447 kubelet[1479]: E1124 13:19:09.681760    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990349681496093  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:09 addons-377447 kubelet[1479]: E1124 13:19:09.681818    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990349681496093  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:19 addons-377447 kubelet[1479]: E1124 13:19:19.685985    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990359685515301  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:19 addons-377447 kubelet[1479]: E1124 13:19:19.686028    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990359685515301  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:29 addons-377447 kubelet[1479]: E1124 13:19:29.689415    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990369688877492  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:29 addons-377447 kubelet[1479]: E1124 13:19:29.689441    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990369688877492  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:39 addons-377447 kubelet[1479]: E1124 13:19:39.692310    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990379691855365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:39 addons-377447 kubelet[1479]: E1124 13:19:39.692336    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990379691855365  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:45 addons-377447 kubelet[1479]: I1124 13:19:45.326259    1479 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-pkczz" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:19:49 addons-377447 kubelet[1479]: I1124 13:19:49.322692    1479 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:19:49 addons-377447 kubelet[1479]: E1124 13:19:49.694841    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990389694422230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:49 addons-377447 kubelet[1479]: E1124 13:19:49.694866    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990389694422230  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:59 addons-377447 kubelet[1479]: E1124 13:19:59.697101    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990399696747379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:19:59 addons-377447 kubelet[1479]: E1124 13:19:59.697162    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990399696747379  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:20:09 addons-377447 kubelet[1479]: I1124 13:20:09.333802    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drczf\" (UniqueName: \"kubernetes.io/projected/c4db4f77-4b02-4690-915a-595675c20146-kube-api-access-drczf\") pod \"hello-world-app-5d498dc89-s7lts\" (UID: \"c4db4f77-4b02-4690-915a-595675c20146\") " pod="default/hello-world-app-5d498dc89-s7lts"
	Nov 24 13:20:09 addons-377447 kubelet[1479]: E1124 13:20:09.699350    1479 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763990409699004726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 24 13:20:09 addons-377447 kubelet[1479]: E1124 13:20:09.699372    1479 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763990409699004726  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [e3fc553f7c2c997c967013cf53450ee9bb1079d0d755674582ead72403f048fd] <==
	W1124 13:19:46.218419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:48.222180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:48.226716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:50.229626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:50.235314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:52.238317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:52.244761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:54.248575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:54.253323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:56.257611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:56.265910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:58.275265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:19:58.279382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:00.283100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:00.289628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:02.292431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:02.297090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:04.300359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:04.307188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:06.310150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:06.315351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:08.319500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:08.328509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:10.333273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:10.338695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-377447 -n addons-377447
helpers_test.go:269: (dbg) Run:  kubectl --context addons-377447 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-s7lts ingress-nginx-admission-create-xvplk ingress-nginx-admission-patch-v4c2b
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-377447 describe pod hello-world-app-5d498dc89-s7lts ingress-nginx-admission-create-xvplk ingress-nginx-admission-patch-v4c2b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-377447 describe pod hello-world-app-5d498dc89-s7lts ingress-nginx-admission-create-xvplk ingress-nginx-admission-patch-v4c2b: exit status 1 (65.632754ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-s7lts
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-377447/192.168.39.2
	Start Time:       Mon, 24 Nov 2025 13:20:09 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-drczf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-drczf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-s7lts to addons-377447
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xvplk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v4c2b" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-377447 describe pod hello-world-app-5d498dc89-s7lts ingress-nginx-admission-create-xvplk ingress-nginx-admission-patch-v4c2b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable ingress-dns --alsologtostderr -v=1: (1.749211074s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable ingress --alsologtostderr -v=1: (7.656334265s)
--- FAIL: TestAddons/parallel/Ingress (157.41s)
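The failure above is at the HTTP reachability step, not at scheduling: the node description earlier in this log shows both the nginx backend and the ingress-nginx controller pod running minutes before the test gave up. Below is a minimal Go sketch of the kind of probe this step performs; it approximates, rather than reproduces, the test's mechanism, and the URL, Host rule and retry budget are illustrative placeholders. It would have to run where 127.0.0.1 reaches the cluster's ingress port (e.g. inside the minikube VM).

package main

// Sketch of an Ingress reachability probe: GET a loopback endpoint with a
// Host header matching the Ingress rule until an HTTP 200 arrives or the
// deadline passes. All concrete values here are assumptions, not the
// test's actual configuration.

import (
	"fmt"
	"net/http"
	"time"
)

func probeIngress(url, host string, deadline time.Duration) error {
	client := &http.Client{Timeout: 5 * time.Second} // per-attempt cap
	for end := time.Now().Add(deadline); time.Now().Before(end); time.Sleep(2 * time.Second) {
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return err
		}
		req.Host = host // routes the request to the matching Ingress rule
		resp, err := client.Do(req)
		if err != nil {
			continue // refused or timed out; retry until the deadline
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("no HTTP 200 from %s (Host: %s) within %s", url, host, deadline)
}

func main() {
	// Placeholder endpoint and host rule, standing in for whatever the
	// Ingress fixture under test configures.
	if err := probeIngress("http://127.0.0.1/", "nginx.example.com", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}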

TestPreload (162.45s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-684261 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 14:03:59.444074  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-684261 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m34.511927123s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684261 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-684261 image pull gcr.io/k8s-minikube/busybox: (3.704836258s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-684261
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-684261: (6.790435589s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-684261 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1124 14:05:56.377339  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-684261 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (54.719071871s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684261 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-11-24 14:06:27.34278003 +0000 UTC m=+3131.584885829
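The image list above contains only the stock v1.32.0 system images plus storage-provisioner and kindnetd; gcr.io/k8s-minikube/busybox, pulled before the stop, is gone after the restart, which is the condition preload_test.go:75 flags. Below is a minimal Go sketch of that post-restart assertion. The binary path, profile and image name are taken from the log; treating the check as a plain substring match over the CLI output is an assumption, not necessarily the test's exact logic.

package main

// Sketch: after stop/start, the previously pulled image should still appear
// in "minikube image list". Substring matching is an assumed mechanism.

import (
	"fmt"
	"os/exec"
	"strings"
)

func imageRetained(profile, image string) (bool, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image list: %v: %s", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	ok, err := imageRetained("test-preload-684261", "gcr.io/k8s-minikube/busybox")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("busybox retained across restart:", ok) // false for the run logged above
}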
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-684261 -n test-preload-684261
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-684261 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-037620 ssh -n multinode-037620-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ ssh     │ multinode-037620 ssh -n multinode-037620 sudo cat /home/docker/cp-test_multinode-037620-m03_multinode-037620.txt                                          │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ cp      │ multinode-037620 cp multinode-037620-m03:/home/docker/cp-test.txt multinode-037620-m02:/home/docker/cp-test_multinode-037620-m03_multinode-037620-m02.txt │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ ssh     │ multinode-037620 ssh -n multinode-037620-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ ssh     │ multinode-037620 ssh -n multinode-037620-m02 sudo cat /home/docker/cp-test_multinode-037620-m03_multinode-037620-m02.txt                                  │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ node    │ multinode-037620 node stop m03                                                                                                                            │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:53 UTC │
	│ node    │ multinode-037620 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:53 UTC │
	│ node    │ list -p multinode-037620                                                                                                                                  │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ stop    │ -p multinode-037620                                                                                                                                       │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:56 UTC │
	│ start   │ -p multinode-037620 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:56 UTC │ 24 Nov 25 13:58 UTC │
	│ node    │ list -p multinode-037620                                                                                                                                  │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ node    │ multinode-037620 node delete m03                                                                                                                          │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ stop    │ multinode-037620 stop                                                                                                                                     │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p multinode-037620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:03 UTC │
	│ node    │ list -p multinode-037620                                                                                                                                  │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │                     │
	│ start   │ -p multinode-037620-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-037620-m02 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │                     │
	│ start   │ -p multinode-037620-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-037620-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:03 UTC │
	│ node    │ add -p multinode-037620                                                                                                                                   │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │                     │
	│ delete  │ -p multinode-037620-m03                                                                                                                                   │ multinode-037620-m03 │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:03 UTC │
	│ delete  │ -p multinode-037620                                                                                                                                       │ multinode-037620     │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:03 UTC │
	│ start   │ -p test-preload-684261 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-684261  │ jenkins │ v1.37.0 │ 24 Nov 25 14:03 UTC │ 24 Nov 25 14:05 UTC │
	│ image   │ test-preload-684261 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-684261  │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ stop    │ -p test-preload-684261                                                                                                                                    │ test-preload-684261  │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:05 UTC │
	│ start   │ -p test-preload-684261 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-684261  │ jenkins │ v1.37.0 │ 24 Nov 25 14:05 UTC │ 24 Nov 25 14:06 UTC │
	│ image   │ test-preload-684261 image list                                                                                                                            │ test-preload-684261  │ jenkins │ v1.37.0 │ 24 Nov 25 14:06 UTC │ 24 Nov 25 14:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:05:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:05:32.493590  159354 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:05:32.493736  159354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:05:32.493749  159354 out.go:374] Setting ErrFile to fd 2...
	I1124 14:05:32.493755  159354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:05:32.494006  159354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:05:32.494501  159354 out.go:368] Setting JSON to false
	I1124 14:05:32.495400  159354 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6455,"bootTime":1763986677,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:05:32.495459  159354 start.go:143] virtualization: kvm guest
	I1124 14:05:32.497257  159354 out.go:179] * [test-preload-684261] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:05:32.498626  159354 notify.go:221] Checking for updates...
	I1124 14:05:32.498647  159354 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:05:32.499689  159354 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:05:32.500686  159354 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 14:05:32.501701  159354 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 14:05:32.502645  159354 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:05:32.503573  159354 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:05:32.504888  159354 config.go:182] Loaded profile config "test-preload-684261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 14:05:32.506382  159354 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1124 14:05:32.507316  159354 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:05:32.543012  159354 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 14:05:32.543944  159354 start.go:309] selected driver: kvm2
	I1124 14:05:32.543958  159354 start.go:927] validating driver "kvm2" against &{Name:test-preload-684261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-684261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:05:32.544103  159354 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:05:32.545083  159354 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:05:32.545141  159354 cni.go:84] Creating CNI manager for ""
	I1124 14:05:32.545214  159354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 14:05:32.545295  159354 start.go:353] cluster config:
	{Name:test-preload-684261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-684261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:05:32.545416  159354 iso.go:125] acquiring lock: {Name:mk70c2563fd35b13c556749f7252ab4e6e575da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:05:32.547026  159354 out.go:179] * Starting "test-preload-684261" primary control-plane node in "test-preload-684261" cluster
	I1124 14:05:32.548210  159354 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 14:05:32.731546  159354 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1124 14:05:32.731583  159354 cache.go:65] Caching tarball of preloaded images
	I1124 14:05:32.731770  159354 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 14:05:32.733444  159354 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1124 14:05:32.734575  159354 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 14:05:32.845157  159354 preload.go:295] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1124 14:05:32.845218  159354 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
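	The preload URL above carries a "?checksum=md5:" query, so the download is verified against the digest fetched from the GCS API one line earlier. A minimal Go sketch of that hash-while-streaming idea (the helper name is invented; minikube's real download path goes through a go-getter-style library that interprets the checksum query itself):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dst while hashing, then verifies
// the digest against want (hex-encoded). Illustrative sketch only.
func downloadWithMD5(url, dst, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to disk and to the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum are the ones from this log.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preloaded.tar.lz4", "2acdb4dde52794f2167c79dcee7507ae"); err != nil {
		panic(err)
	}
}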
	I1124 14:05:43.650926  159354 cache.go:68] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1124 14:05:43.651140  159354 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/config.json ...
	I1124 14:05:43.651391  159354 start.go:360] acquireMachinesLock for test-preload-684261: {Name:mk9fe90a150b6a232eb17397ca6aca3c1b63dcde Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1124 14:05:43.651466  159354 start.go:364] duration metric: took 50.118µs to acquireMachinesLock for "test-preload-684261"
	I1124 14:05:43.651490  159354 start.go:96] Skipping create...Using existing machine configuration
	I1124 14:05:43.651497  159354 fix.go:54] fixHost starting: 
	I1124 14:05:43.653431  159354 fix.go:112] recreateIfNeeded on test-preload-684261: state=Stopped err=<nil>
	W1124 14:05:43.653459  159354 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 14:05:43.655361  159354 out.go:252] * Restarting existing kvm2 VM for "test-preload-684261" ...
	I1124 14:05:43.655401  159354 main.go:143] libmachine: starting domain...
	I1124 14:05:43.655413  159354 main.go:143] libmachine: ensuring networks are active...
	I1124 14:05:43.656205  159354 main.go:143] libmachine: Ensuring network default is active
	I1124 14:05:43.656568  159354 main.go:143] libmachine: Ensuring network mk-test-preload-684261 is active
	I1124 14:05:43.656948  159354 main.go:143] libmachine: getting domain XML...
	I1124 14:05:43.657954  159354 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-684261</name>
	  <uuid>f23877d2-332a-48f0-8a40-5b235ce60417</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/test-preload-684261.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:95:7a:af'/>
	      <source network='mk-test-preload-684261'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7d:9d:3c'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1124 14:05:44.923903  159354 main.go:143] libmachine: waiting for domain to start...
	I1124 14:05:44.925189  159354 main.go:143] libmachine: domain is now running
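	The domain XML above is already defined in libvirt, so restarting the VM reduces to booting the existing domain and polling until it reports active. A hedged sketch of that call sequence using the github.com/libvirt/libvirt-go bindings (the kvm2 driver uses these bindings; the polling loop and interval here are illustrative):

package main

import (
	"time"

	libvirt "github.com/libvirt/libvirt-go"
)

// startDomain boots a pre-defined (persistent) libvirt domain and
// waits for it to become active. Illustrative sketch, not minikube's
// exact source.
func startDomain(name string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName(name)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Create() starts an already-defined domain.
	if err := dom.Create(); err != nil {
		return err
	}
	// Poll until libvirt reports the domain running.
	for {
		active, err := dom.IsActive()
		if err != nil {
			return err
		}
		if active {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := startDomain("test-preload-684261"); err != nil {
		panic(err)
	}
}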
	I1124 14:05:44.925206  159354 main.go:143] libmachine: waiting for IP...
	I1124 14:05:44.925950  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:44.926466  159354 main.go:143] libmachine: domain test-preload-684261 has current primary IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:44.926480  159354 main.go:143] libmachine: found domain IP: 192.168.39.58
	I1124 14:05:44.926485  159354 main.go:143] libmachine: reserving static IP address...
	I1124 14:05:44.926884  159354 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-684261", mac: "52:54:00:95:7a:af", ip: "192.168.39.58"} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:04:02 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:44.926906  159354 main.go:143] libmachine: skip adding static IP to network mk-test-preload-684261 - found existing host DHCP lease matching {name: "test-preload-684261", mac: "52:54:00:95:7a:af", ip: "192.168.39.58"}
	I1124 14:05:44.926913  159354 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain test-preload-684261
	I1124 14:05:44.926918  159354 main.go:143] libmachine: waiting for SSH...
	I1124 14:05:44.926938  159354 main.go:143] libmachine: Getting to WaitForSSH function...
	I1124 14:05:44.929018  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:44.929381  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:04:02 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:44.929406  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:44.929555  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:44.929770  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:44.929779  159354 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1124 14:05:47.988365  159354 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.58:22: connect: no route to host
	I1124 14:05:54.068348  159354 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.58:22: connect: no route to host
	I1124 14:05:57.179721  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: 
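	The two "no route to host" dials before the empty SSH result show the driver polling TCP port 22 until the guest's network stack comes up. A self-contained sketch of that wait loop (retry interval and timeout are illustrative, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr:22 until the TCP handshake succeeds or the
// deadline passes, mirroring the retries in the log above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(addr, "22"), 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 is reachable; SSH can proceed
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.58", 2*time.Minute); err != nil {
		panic(err)
	}
}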
	I1124 14:05:57.183472  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.183984  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.184016  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.184337  159354 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/config.json ...
	I1124 14:05:57.184604  159354 machine.go:94] provisionDockerMachine start ...
	I1124 14:05:57.186741  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.187065  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.187121  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.187303  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:57.187572  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:57.187584  159354 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:05:57.287803  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1124 14:05:57.287851  159354 buildroot.go:166] provisioning hostname "test-preload-684261"
	I1124 14:05:57.291103  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.291562  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.291592  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.291782  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:57.292029  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:57.292043  159354 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-684261 && echo "test-preload-684261" | sudo tee /etc/hostname
	I1124 14:05:57.407849  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-684261
	
	I1124 14:05:57.410771  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.411177  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.411205  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.411394  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:57.411666  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:57.411692  159354 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-684261' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-684261/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-684261' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:05:57.519595  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: 
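	The guarded shell above keeps /etc/hosts idempotent: the 127.0.1.1 entry is rewritten only if the hostname is not already present, and appended only if no 127.0.1.1 line exists at all. A sketch of how such a command can be rendered for an arbitrary hostname (an illustrative reconstruction, not minikube's exact source):

package main

import "fmt"

// hostsFixCmd renders the idempotent /etc/hosts edit from the log for
// the given hostname, using an indexed fmt verb so the name appears
// in all three places.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCmd("test-preload-684261"))
}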
	I1124 14:05:57.519638  159354 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21932-132228/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-132228/.minikube}
	I1124 14:05:57.519689  159354 buildroot.go:174] setting up certificates
	I1124 14:05:57.519703  159354 provision.go:84] configureAuth start
	I1124 14:05:57.522687  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.523030  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.523063  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.525225  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.525524  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.525547  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.525657  159354 provision.go:143] copyHostCerts
	I1124 14:05:57.525716  159354 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-132228/.minikube/key.pem, removing ...
	I1124 14:05:57.525740  159354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-132228/.minikube/key.pem
	I1124 14:05:57.525854  159354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/key.pem (1675 bytes)
	I1124 14:05:57.526049  159354 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-132228/.minikube/ca.pem, removing ...
	I1124 14:05:57.526063  159354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.pem
	I1124 14:05:57.526126  159354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/ca.pem (1078 bytes)
	I1124 14:05:57.526217  159354 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-132228/.minikube/cert.pem, removing ...
	I1124 14:05:57.526228  159354 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-132228/.minikube/cert.pem
	I1124 14:05:57.526270  159354 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-132228/.minikube/cert.pem (1123 bytes)
	I1124 14:05:57.526344  159354 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem org=jenkins.test-preload-684261 san=[127.0.0.1 192.168.39.58 localhost minikube test-preload-684261]
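	The server cert is issued with a SAN list covering loopback, the VM IP, and the host names, signed by the cached minikubeCA. A standard-library sketch of issuing such a cert (the 26280h lifetime matches CertExpiration in the profile; the throwaway CA in main exists only to make the sketch self-contained):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate covering the SAN list
// shown in the log. Illustrative sketch, not minikube's exact source.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-684261"}},
		DNSNames:     []string{"localhost", "minikube", "test-preload-684261"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA standing in for the cached minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(0),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := newServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}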
	I1124 14:05:57.576581  159354 provision.go:177] copyRemoteCerts
	I1124 14:05:57.576648  159354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:05:57.579026  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.579377  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.579402  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.579545  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:05:57.660789  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:05:57.691397  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 14:05:57.721736  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:05:57.751613  159354 provision.go:87] duration metric: took 231.882876ms to configureAuth
	I1124 14:05:57.751647  159354 buildroot.go:189] setting minikube options for container-runtime
	I1124 14:05:57.751868  159354 config.go:182] Loaded profile config "test-preload-684261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 14:05:57.754463  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.754906  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.754930  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.755093  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:57.755320  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:57.755336  159354 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:05:57.985924  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:05:57.985954  159354 machine.go:97] duration metric: took 801.333127ms to provisionDockerMachine
	I1124 14:05:57.985969  159354 start.go:293] postStartSetup for "test-preload-684261" (driver="kvm2")
	I1124 14:05:57.985982  159354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:05:57.986052  159354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:05:57.989016  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.989472  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:57.989503  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:57.989665  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:05:58.071656  159354 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:05:58.076406  159354 info.go:137] Remote host: Buildroot 2025.02
	I1124 14:05:58.076435  159354 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-132228/.minikube/addons for local assets ...
	I1124 14:05:58.076503  159354 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-132228/.minikube/files for local assets ...
	I1124 14:05:58.076580  159354 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-132228/.minikube/files/etc/ssl/certs/1362682.pem -> 1362682.pem in /etc/ssl/certs
	I1124 14:05:58.076668  159354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:05:58.088061  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/files/etc/ssl/certs/1362682.pem --> /etc/ssl/certs/1362682.pem (1708 bytes)
	I1124 14:05:58.116904  159354 start.go:296] duration metric: took 130.918048ms for postStartSetup
	I1124 14:05:58.116956  159354 fix.go:56] duration metric: took 14.46545781s for fixHost
	I1124 14:05:58.119821  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.120197  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:58.120228  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.120413  159354 main.go:143] libmachine: Using SSH client type: native
	I1124 14:05:58.120669  159354 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1124 14:05:58.120681  159354 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1124 14:05:58.219604  159354 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763993158.178522721
	
	I1124 14:05:58.219632  159354 fix.go:216] guest clock: 1763993158.178522721
	I1124 14:05:58.219640  159354 fix.go:229] Guest: 2025-11-24 14:05:58.178522721 +0000 UTC Remote: 2025-11-24 14:05:58.116962885 +0000 UTC m=+25.674412664 (delta=61.559836ms)
	I1124 14:05:58.219658  159354 fix.go:200] guest clock delta is within tolerance: 61.559836ms
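	The clock check runs date +%s.%N in the guest and accepts the host/guest delta if it is within tolerance (61ms here). A small sketch of parsing that output and computing the delta (the 1s tolerance is illustrative; the fractional field is assumed to be the full 9-digit nanosecond value, as in this log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns its
// offset from the host clock at the time of the call.
func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	// Guest timestamp taken from this log.
	d, err := clockDelta("1763993158.178522721\n")
	if err != nil {
		panic(err)
	}
	fmt.Println("guest clock delta:", d, "within tolerance:", d.Abs() < time.Second)
}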
	I1124 14:05:58.219663  159354 start.go:83] releasing machines lock for "test-preload-684261", held for 14.568187642s
	I1124 14:05:58.222622  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.223035  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:58.223067  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.223652  159354 ssh_runner.go:195] Run: cat /version.json
	I1124 14:05:58.223709  159354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:05:58.226603  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.226776  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.227060  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:58.227087  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.227169  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:58.227203  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:58.227267  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:05:58.227476  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:05:58.301720  159354 ssh_runner.go:195] Run: systemctl --version
	I1124 14:05:58.327463  159354 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:05:58.467655  159354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:05:58.474413  159354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:05:58.474484  159354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:05:58.492949  159354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 14:05:58.492982  159354 start.go:496] detecting cgroup driver to use...
	I1124 14:05:58.493058  159354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:05:58.514221  159354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:05:58.533504  159354 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:05:58.533580  159354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:05:58.550248  159354 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:05:58.565594  159354 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:05:58.707507  159354 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:05:58.914345  159354 docker.go:234] disabling docker service ...
	I1124 14:05:58.914439  159354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:05:58.930504  159354 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:05:58.944852  159354 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:05:59.094594  159354 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:05:59.234998  159354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:05:59.249769  159354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:05:59.270389  159354 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1124 14:05:59.270472  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.282068  159354 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1124 14:05:59.282178  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.294090  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.305428  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.316761  159354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:05:59.328696  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.340067  159354 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:05:59.358818  159354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
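	Each of the crio.conf.d edits above is a sed substitution over /etc/crio/crio.conf.d/02-crio.conf. The same rewrite done in-process, shown for the pause_image key (illustrative; the log performs it via sed over SSH, and writing under /etc requires root):

package main

import (
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image key in a crio drop-in, the
// same substitution the log performs with sed.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Matches any line mentioning `pause_image = `, like the sed
	// pattern ^.*pause_image = .*$ in the log.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		panic(err)
	}
}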
	I1124 14:05:59.370414  159354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:05:59.380181  159354 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1124 14:05:59.380259  159354 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1124 14:05:59.400036  159354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
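	The failed sysctl at 14:05:59.380 is expected on a fresh boot: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the probe doubles as a load check and modprobe is the fallback. A compact sketch of that probe-then-load sequence:

package main

import "os/exec"

// ensureBrNetfilter probes the bridge sysctl and loads br_netfilter
// when the probe fails, mirroring the fallback in the log above.
func ensureBrNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
		return nil // module already loaded, sysctl tree present
	}
	return exec.Command("sudo", "modprobe", "br_netfilter").Run()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		panic(err)
	}
}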
	I1124 14:05:59.410641  159354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:05:59.549439  159354 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:05:59.661689  159354 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:05:59.661795  159354 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:05:59.667058  159354 start.go:564] Will wait 60s for crictl version
	I1124 14:05:59.667141  159354 ssh_runner.go:195] Run: which crictl
	I1124 14:05:59.671090  159354 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1124 14:05:59.704042  159354 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1124 14:05:59.704176  159354 ssh_runner.go:195] Run: crio --version
	I1124 14:05:59.732162  159354 ssh_runner.go:195] Run: crio --version
	I1124 14:05:59.761901  159354 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1124 14:05:59.765657  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:59.766010  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:05:59.766035  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:05:59.766256  159354 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1124 14:05:59.771480  159354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:05:59.786206  159354 kubeadm.go:884] updating cluster {Name:test-preload-684261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-684261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:05:59.786419  159354 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1124 14:05:59.786535  159354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:05:59.817348  159354 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1124 14:05:59.817433  159354 ssh_runner.go:195] Run: which lz4
	I1124 14:05:59.821563  159354 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1124 14:05:59.826174  159354 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1124 14:05:59.826218  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1124 14:06:01.126377  159354 crio.go:462] duration metric: took 1.304850783s to copy over tarball
	I1124 14:06:01.126454  159354 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1124 14:06:02.792546  159354 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666064684s)
	I1124 14:06:02.792580  159354 crio.go:469] duration metric: took 1.666168709s to extract the tarball
	I1124 14:06:02.792591  159354 ssh_runner.go:146] rm: /preloaded.tar.lz4
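	The 398MB tarball is copied into the guest and unpacked into /var with xattrs preserved, so file capabilities on the cached images survive extraction; the tarball is then deleted. A sketch of the extraction step as a local command (minikube runs the same command over SSH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload replays the extraction from the log: untar the
// lz4-compressed image cache into /var, keeping security xattrs.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	fmt.Printf("extracted %s in %s\n", tarball, time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}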
	I1124 14:06:02.832219  159354 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:06:02.872845  159354 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:06:02.872877  159354 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:06:02.872886  159354 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.32.0 crio true true} ...
	I1124 14:06:02.872989  159354 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-684261 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-684261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:06:02.873057  159354 ssh_runner.go:195] Run: crio config
	I1124 14:06:02.918216  159354 cni.go:84] Creating CNI manager for ""
	I1124 14:06:02.918243  159354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 14:06:02.918267  159354 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:06:02.918301  159354 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-684261 NodeName:test-preload-684261 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:06:02.918450  159354 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-684261"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:06:02.918522  159354 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1124 14:06:02.930205  159354 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:06:02.930287  159354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:06:02.941614  159354 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1124 14:06:02.960863  159354 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:06:02.980058  159354 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1124 14:06:02.999694  159354 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1124 14:06:03.003722  159354 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:06:03.017408  159354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:06:03.158255  159354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:06:03.193926  159354 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261 for IP: 192.168.39.58
	I1124 14:06:03.193953  159354 certs.go:195] generating shared ca certs ...
	I1124 14:06:03.193972  159354 certs.go:227] acquiring lock for ca certs: {Name:mkb6ec2dec3468295f1184b421b26a51902e7ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:03.194172  159354 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key
	I1124 14:06:03.194215  159354 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key
	I1124 14:06:03.194234  159354 certs.go:257] generating profile certs ...
	I1124 14:06:03.194339  159354 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.key
	I1124 14:06:03.194409  159354 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/apiserver.key.e289714e
	I1124 14:06:03.194453  159354 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/proxy-client.key
	I1124 14:06:03.194562  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/136268.pem (1338 bytes)
	W1124 14:06:03.194592  159354 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-132228/.minikube/certs/136268_empty.pem, impossibly tiny 0 bytes
	I1124 14:06:03.194604  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:06:03.194627  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:06:03.194649  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:06:03.194672  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/certs/key.pem (1675 bytes)
	I1124 14:06:03.194712  159354 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-132228/.minikube/files/etc/ssl/certs/1362682.pem (1708 bytes)
	I1124 14:06:03.195370  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:06:03.241362  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:06:03.281200  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:06:03.311070  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1124 14:06:03.339705  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 14:06:03.368330  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:06:03.397859  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:06:03.426818  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:06:03.455299  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/files/etc/ssl/certs/1362682.pem --> /usr/share/ca-certificates/1362682.pem (1708 bytes)
	I1124 14:06:03.485051  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:06:03.514138  159354 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-132228/.minikube/certs/136268.pem --> /usr/share/ca-certificates/136268.pem (1338 bytes)
	I1124 14:06:03.542666  159354 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:06:03.562029  159354 ssh_runner.go:195] Run: openssl version
	I1124 14:06:03.568518  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1362682.pem && ln -fs /usr/share/ca-certificates/1362682.pem /etc/ssl/certs/1362682.pem"
	I1124 14:06:03.581450  159354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1362682.pem
	I1124 14:06:03.586290  159354 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:22 /usr/share/ca-certificates/1362682.pem
	I1124 14:06:03.586352  159354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1362682.pem
	I1124 14:06:03.593207  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1362682.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:06:03.606043  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:06:03.619014  159354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:03.624210  159354 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:15 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:03.624286  159354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:06:03.631590  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:06:03.644554  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/136268.pem && ln -fs /usr/share/ca-certificates/136268.pem /etc/ssl/certs/136268.pem"
	I1124 14:06:03.657588  159354 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/136268.pem
	I1124 14:06:03.662974  159354 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:22 /usr/share/ca-certificates/136268.pem
	I1124 14:06:03.663048  159354 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/136268.pem
	I1124 14:06:03.670489  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/136268.pem /etc/ssl/certs/51391683.0"
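
	The three hash/symlink rounds above follow OpenSSL's CA lookup convention: "openssl x509 -hash -noout" prints the certificate's subject hash, and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL resolve the CA by that hash (e.g. b5213941.0 for minikubeCA.pem). A minimal sketch of the same pattern, assuming the openssl binary is on PATH (illustrative, not minikube's code):

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    // linkCert computes a certificate's OpenSSL subject hash and exposes
	    // the cert in the lookup directory as <hash>.0, like "ln -fs" above.
	    func linkCert(certPath, certsDir string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out))
	    	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	    	os.Remove(link) // -f semantics: replace any existing link
	    	return os.Symlink(certPath, link)
	    }

	    func main() {
	    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }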
	I1124 14:06:03.683623  159354 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:06:03.688910  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:06:03.696303  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:06:03.703687  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:06:03.711377  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:06:03.718848  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:06:03.726632  159354 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 14:06:03.734399  159354 kubeadm.go:401] StartCluster: {Name:test-preload-684261 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-684261 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:06:03.734510  159354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:06:03.734571  159354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:06:03.768439  159354 cri.go:89] found id: ""
	I1124 14:06:03.768534  159354 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:06:03.780945  159354 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:06:03.780971  159354 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:06:03.781020  159354 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:06:03.793041  159354 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:06:03.793608  159354 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-684261" does not appear in /home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 14:06:03.793754  159354 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-132228/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-684261" cluster setting kubeconfig missing "test-preload-684261" context setting]
	I1124 14:06:03.794147  159354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/kubeconfig: {Name:mk8ced9b1c350dbdaec836e11cf0177ea98a374d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:03.794830  159354 kapi.go:59] client config for test-preload-684261: &rest.Config{Host:"https://192.168.39.58:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.crt", KeyFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.key", CAFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 14:06:03.795380  159354 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 14:06:03.795402  159354 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 14:06:03.795409  159354 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 14:06:03.795415  159354 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 14:06:03.795422  159354 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 14:06:03.795905  159354 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:06:03.807591  159354 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.58
	I1124 14:06:03.807625  159354 kubeadm.go:1161] stopping kube-system containers ...
	I1124 14:06:03.807645  159354 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1124 14:06:03.807699  159354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:06:03.840065  159354 cri.go:89] found id: ""
	I1124 14:06:03.840148  159354 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1124 14:06:03.858193  159354 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:06:03.870302  159354 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:06:03.870324  159354 kubeadm.go:158] found existing configuration files:
	
	I1124 14:06:03.870372  159354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:06:03.881084  159354 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:06:03.881178  159354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:06:03.892356  159354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:06:03.903309  159354 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:06:03.903366  159354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:06:03.915069  159354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:06:03.925523  159354 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:06:03.925575  159354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:06:03.936804  159354 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:06:03.947526  159354 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:06:03.947599  159354 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:06:03.959454  159354 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:06:03.971281  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 14:06:04.023136  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 14:06:04.935120  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1124 14:06:05.185405  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 14:06:05.259694  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
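
	The five Run lines above are kubeadm's restart path broken into explicit init phases: certs, kubeconfigs, kubelet start, static control-plane pods, then local etcd. A compact sketch that replays the same sequence, assuming kubeadm is on PATH and using the config path from the log (minikube actually invokes these over SSH with its bundled binaries):

	    package main

	    import (
	    	"os"
	    	"os/exec"
	    )

	    func main() {
	    	// Phase order taken from the log above.
	    	phases := [][]string{
	    		{"init", "phase", "certs", "all"},
	    		{"init", "phase", "kubeconfig", "all"},
	    		{"init", "phase", "kubelet-start"},
	    		{"init", "phase", "control-plane", "all"},
	    		{"init", "phase", "etcd", "local"},
	    	}
	    	for _, p := range phases {
	    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
	    		cmd := exec.Command("kubeadm", args...)
	    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	    		if err := cmd.Run(); err != nil {
	    			panic(err) // a failed phase aborts the restart
	    		}
	    	}
	    }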
	I1124 14:06:05.327532  159354 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:06:05.327612  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:05.827707  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:06.328143  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:06.828450  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:07.328585  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:07.828407  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:07.862887  159354 api_server.go:72] duration metric: took 2.535367105s to wait for apiserver process to appear ...
	I1124 14:06:07.862920  159354 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:06:07.862945  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:10.364433  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 14:06:10.364477  159354 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 14:06:10.364496  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:10.425075  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 14:06:10.425114  159354 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 14:06:10.863816  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:10.868287  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:06:10.868314  159354 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:06:11.364017  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:11.378196  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:06:11.378253  159354 api_server.go:103] status: https://192.168.39.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:06:11.863797  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:11.867971  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1124 14:06:11.875286  159354 api_server.go:141] control plane version: v1.32.0
	I1124 14:06:11.875316  159354 api_server.go:131] duration metric: took 4.012388493s to wait for apiserver health ...
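
	The healthz sequence above is the expected restart progression: anonymous probes are rejected with 403 until RBAC bootstrap runs, /healthz then reports 500 while individual poststarthooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, and finally 200 "ok". A minimal sketch of such a poller, with the endpoint taken from the log; it skips TLS verification because the apiserver's serving cert is not in the host trust store (illustrative only):

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			// Unauthenticated probe: the apiserver cert is self-managed,
	    			// so certificate verification is skipped.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	// Treat anything but 200 (403, 500, connection refused) as "retry".
	    	for i := 0; i < 120; i++ {
	    		resp, err := client.Get("https://192.168.39.58:8443/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				fmt.Printf("healthz: %s\n", body)
	    				return
	    			}
	    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	fmt.Println("apiserver never became healthy")
	    }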
	I1124 14:06:11.875330  159354 cni.go:84] Creating CNI manager for ""
	I1124 14:06:11.875338  159354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 14:06:11.876858  159354 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1124 14:06:11.878100  159354 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1124 14:06:11.898072  159354 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1124 14:06:11.925038  159354 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:06:11.931931  159354 system_pods.go:59] 7 kube-system pods found
	I1124 14:06:11.931970  159354 system_pods.go:61] "coredns-668d6bf9bc-9nr9m" [ec65392f-6da3-4258-aa01-cf769896ea09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:11.931978  159354 system_pods.go:61] "etcd-test-preload-684261" [0a521f51-2106-491e-af81-3cbf154eb68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:06:11.931990  159354 system_pods.go:61] "kube-apiserver-test-preload-684261" [631b1126-120f-4f7a-a014-1890ea518aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:06:11.931999  159354 system_pods.go:61] "kube-controller-manager-test-preload-684261" [ac11c44c-c34a-4f78-8fd1-d3869a77a20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:06:11.932008  159354 system_pods.go:61] "kube-proxy-f7rq5" [2e3868f6-829f-4036-80bd-525854bd54fa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 14:06:11.932021  159354 system_pods.go:61] "kube-scheduler-test-preload-684261" [487da816-7faa-4110-8788-e76348dfc421] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:06:11.932030  159354 system_pods.go:61] "storage-provisioner" [6f18e348-2d5b-4648-8495-b62dcf733f8b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 14:06:11.932040  159354 system_pods.go:74] duration metric: took 6.982298ms to wait for pod list to return data ...
	I1124 14:06:11.932048  159354 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:06:11.936806  159354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 14:06:11.936836  159354 node_conditions.go:123] node cpu capacity is 2
	I1124 14:06:11.936852  159354 node_conditions.go:105] duration metric: took 4.798659ms to run NodePressure ...
	I1124 14:06:11.936908  159354 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1124 14:06:12.206925  159354 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1124 14:06:12.210766  159354 kubeadm.go:744] kubelet initialised
	I1124 14:06:12.210786  159354 kubeadm.go:745] duration metric: took 3.836452ms waiting for restarted kubelet to initialise ...
	I1124 14:06:12.210802  159354 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 14:06:12.227092  159354 ops.go:34] apiserver oom_adj: -16
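
	The -16 read above confirms the restarted apiserver is shielded from the kernel OOM killer (negative adjustments make a process a less likely victim). A small sketch of the same probe, mirroring cat /proc/$(pgrep kube-apiserver)/oom_adj; the -n flag (newest match) is an assumption for when several PIDs match:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    func main() {
	    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
	    		return
	    	}
	    	pid := strings.TrimSpace(string(out))
	    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		return
	    	}
	    	fmt.Printf("apiserver oom_adj: %s", adj) // negative = protected
	    }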
	I1124 14:06:12.227123  159354 kubeadm.go:602] duration metric: took 8.446144948s to restartPrimaryControlPlane
	I1124 14:06:12.227134  159354 kubeadm.go:403] duration metric: took 8.492746461s to StartCluster
	I1124 14:06:12.227150  159354 settings.go:142] acquiring lock: {Name:mk1b72f2bf40456dafe7bf268d29a6f5461b2aa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:12.227237  159354 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 14:06:12.227770  159354 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-132228/kubeconfig: {Name:mk8ced9b1c350dbdaec836e11cf0177ea98a374d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:06:12.228012  159354 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:06:12.228082  159354 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:06:12.228197  159354 addons.go:70] Setting storage-provisioner=true in profile "test-preload-684261"
	I1124 14:06:12.228220  159354 addons.go:239] Setting addon storage-provisioner=true in "test-preload-684261"
	W1124 14:06:12.228233  159354 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:06:12.228264  159354 host.go:66] Checking if "test-preload-684261" exists ...
	I1124 14:06:12.228271  159354 config.go:182] Loaded profile config "test-preload-684261": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1124 14:06:12.228269  159354 addons.go:70] Setting default-storageclass=true in profile "test-preload-684261"
	I1124 14:06:12.228311  159354 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-684261"
	I1124 14:06:12.230125  159354 out.go:179] * Verifying Kubernetes components...
	I1124 14:06:12.230839  159354 kapi.go:59] client config for test-preload-684261: &rest.Config{Host:"https://192.168.39.58:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.crt", KeyFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.key", CAFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 14:06:12.231130  159354 addons.go:239] Setting addon default-storageclass=true in "test-preload-684261"
	W1124 14:06:12.231169  159354 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:06:12.231193  159354 host.go:66] Checking if "test-preload-684261" exists ...
	I1124 14:06:12.231378  159354 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:06:12.231419  159354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:06:12.232548  159354 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:06:12.232563  159354 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:06:12.232744  159354 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:06:12.232761  159354 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:06:12.235737  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:06:12.235763  159354 main.go:143] libmachine: domain test-preload-684261 has defined MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:06:12.236237  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:06:12.236296  159354 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:95:7a:af", ip: ""} in network mk-test-preload-684261: {Iface:virbr1 ExpiryTime:2025-11-24 15:05:54 +0000 UTC Type:0 Mac:52:54:00:95:7a:af Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:test-preload-684261 Clientid:01:52:54:00:95:7a:af}
	I1124 14:06:12.236330  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:06:12.236359  159354 main.go:143] libmachine: domain test-preload-684261 has defined IP address 192.168.39.58 and MAC address 52:54:00:95:7a:af in network mk-test-preload-684261
	I1124 14:06:12.236532  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:06:12.236702  159354 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/test-preload-684261/id_rsa Username:docker}
	I1124 14:06:12.453887  159354 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:06:12.479601  159354 node_ready.go:35] waiting up to 6m0s for node "test-preload-684261" to be "Ready" ...
	I1124 14:06:12.487405  159354 node_ready.go:49] node "test-preload-684261" is "Ready"
	I1124 14:06:12.487436  159354 node_ready.go:38] duration metric: took 7.784976ms for node "test-preload-684261" to be "Ready" ...
	I1124 14:06:12.487453  159354 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:06:12.487526  159354 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:06:12.524844  159354 api_server.go:72] duration metric: took 296.79192ms to wait for apiserver process to appear ...
	I1124 14:06:12.524887  159354 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:06:12.524916  159354 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1124 14:06:12.530168  159354 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1124 14:06:12.531949  159354 api_server.go:141] control plane version: v1.32.0
	I1124 14:06:12.531974  159354 api_server.go:131] duration metric: took 7.079841ms to wait for apiserver health ...
	I1124 14:06:12.531983  159354 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:06:12.538366  159354 system_pods.go:59] 7 kube-system pods found
	I1124 14:06:12.538403  159354 system_pods.go:61] "coredns-668d6bf9bc-9nr9m" [ec65392f-6da3-4258-aa01-cf769896ea09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:12.538413  159354 system_pods.go:61] "etcd-test-preload-684261" [0a521f51-2106-491e-af81-3cbf154eb68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:06:12.538423  159354 system_pods.go:61] "kube-apiserver-test-preload-684261" [631b1126-120f-4f7a-a014-1890ea518aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:06:12.538430  159354 system_pods.go:61] "kube-controller-manager-test-preload-684261" [ac11c44c-c34a-4f78-8fd1-d3869a77a20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:06:12.538435  159354 system_pods.go:61] "kube-proxy-f7rq5" [2e3868f6-829f-4036-80bd-525854bd54fa] Running
	I1124 14:06:12.538443  159354 system_pods.go:61] "kube-scheduler-test-preload-684261" [487da816-7faa-4110-8788-e76348dfc421] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:06:12.538448  159354 system_pods.go:61] "storage-provisioner" [6f18e348-2d5b-4648-8495-b62dcf733f8b] Running
	I1124 14:06:12.538457  159354 system_pods.go:74] duration metric: took 6.466902ms to wait for pod list to return data ...
	I1124 14:06:12.538481  159354 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:06:12.541222  159354 default_sa.go:45] found service account: "default"
	I1124 14:06:12.541242  159354 default_sa.go:55] duration metric: took 2.754113ms for default service account to be created ...
	I1124 14:06:12.541258  159354 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 14:06:12.545195  159354 system_pods.go:86] 7 kube-system pods found
	I1124 14:06:12.545223  159354 system_pods.go:89] "coredns-668d6bf9bc-9nr9m" [ec65392f-6da3-4258-aa01-cf769896ea09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 14:06:12.545230  159354 system_pods.go:89] "etcd-test-preload-684261" [0a521f51-2106-491e-af81-3cbf154eb68a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:06:12.545249  159354 system_pods.go:89] "kube-apiserver-test-preload-684261" [631b1126-120f-4f7a-a014-1890ea518aba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:06:12.545258  159354 system_pods.go:89] "kube-controller-manager-test-preload-684261" [ac11c44c-c34a-4f78-8fd1-d3869a77a20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:06:12.545273  159354 system_pods.go:89] "kube-proxy-f7rq5" [2e3868f6-829f-4036-80bd-525854bd54fa] Running
	I1124 14:06:12.545282  159354 system_pods.go:89] "kube-scheduler-test-preload-684261" [487da816-7faa-4110-8788-e76348dfc421] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:06:12.545290  159354 system_pods.go:89] "storage-provisioner" [6f18e348-2d5b-4648-8495-b62dcf733f8b] Running
	I1124 14:06:12.545299  159354 system_pods.go:126] duration metric: took 4.036166ms to wait for k8s-apps to be running ...
	I1124 14:06:12.545309  159354 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 14:06:12.545359  159354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:06:12.570650  159354 system_svc.go:56] duration metric: took 25.330884ms WaitForService to wait for kubelet
	I1124 14:06:12.570681  159354 kubeadm.go:587] duration metric: took 342.638048ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:06:12.570699  159354 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:06:12.572870  159354 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1124 14:06:12.572891  159354 node_conditions.go:123] node cpu capacity is 2
	I1124 14:06:12.572904  159354 node_conditions.go:105] duration metric: took 2.199411ms to run NodePressure ...
	I1124 14:06:12.572917  159354 start.go:242] waiting for startup goroutines ...
	I1124 14:06:12.703661  159354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:06:12.706042  159354 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:06:13.351521  159354 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:06:13.352758  159354 addons.go:530] duration metric: took 1.124677651s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 14:06:13.352800  159354 start.go:247] waiting for cluster config update ...
	I1124 14:06:13.352817  159354 start.go:256] writing updated cluster config ...
	I1124 14:06:13.353078  159354 ssh_runner.go:195] Run: rm -f paused
	I1124 14:06:13.358342  159354 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:06:13.358838  159354 kapi.go:59] client config for test-preload-684261: &rest.Config{Host:"https://192.168.39.58:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.crt", KeyFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/test-preload-684261/client.key", CAFile:"/home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 14:06:13.362260  159354 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-9nr9m" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:06:15.368984  159354 pod_ready.go:104] pod "coredns-668d6bf9bc-9nr9m" is not "Ready", error: <nil>
	W1124 14:06:17.868002  159354 pod_ready.go:104] pod "coredns-668d6bf9bc-9nr9m" is not "Ready", error: <nil>
	I1124 14:06:19.869201  159354 pod_ready.go:94] pod "coredns-668d6bf9bc-9nr9m" is "Ready"
	I1124 14:06:19.869235  159354 pod_ready.go:86] duration metric: took 6.506944706s for pod "coredns-668d6bf9bc-9nr9m" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:19.872603  159354 pod_ready.go:83] waiting for pod "etcd-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:06:21.879030  159354 pod_ready.go:104] pod "etcd-test-preload-684261" is not "Ready", error: <nil>
	I1124 14:06:23.379081  159354 pod_ready.go:94] pod "etcd-test-preload-684261" is "Ready"
	I1124 14:06:23.379130  159354 pod_ready.go:86] duration metric: took 3.50650423s for pod "etcd-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:23.381368  159354 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 14:06:25.387581  159354 pod_ready.go:104] pod "kube-apiserver-test-preload-684261" is not "Ready", error: <nil>
	I1124 14:06:26.887236  159354 pod_ready.go:94] pod "kube-apiserver-test-preload-684261" is "Ready"
	I1124 14:06:26.887288  159354 pod_ready.go:86] duration metric: took 3.50589028s for pod "kube-apiserver-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:26.889397  159354 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:26.893247  159354 pod_ready.go:94] pod "kube-controller-manager-test-preload-684261" is "Ready"
	I1124 14:06:26.893273  159354 pod_ready.go:86] duration metric: took 3.855502ms for pod "kube-controller-manager-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:26.895388  159354 pod_ready.go:83] waiting for pod "kube-proxy-f7rq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:26.899841  159354 pod_ready.go:94] pod "kube-proxy-f7rq5" is "Ready"
	I1124 14:06:26.899863  159354 pod_ready.go:86] duration metric: took 4.454287ms for pod "kube-proxy-f7rq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:26.901535  159354 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:27.085789  159354 pod_ready.go:94] pod "kube-scheduler-test-preload-684261" is "Ready"
	I1124 14:06:27.085818  159354 pod_ready.go:86] duration metric: took 184.265671ms for pod "kube-scheduler-test-preload-684261" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:06:27.085829  159354 pod_ready.go:40] duration metric: took 13.727457215s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
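
	The pod_ready loop above keys on each pod's PodReady condition. A condensed client-go sketch of the same check for a single pod, with the kubeconfig path and pod name as illustrative placeholders (this is not the test's own helper, which also bounds the wait at 4m0s):

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // isPodReady reports whether the pod's PodReady condition is True,
	    // the same signal the pod_ready.go loop waits on.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, c := range pod.Status.Conditions {
	    		if c.Type == corev1.PodReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	for {
	    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
	    			"coredns-668d6bf9bc-9nr9m", metav1.GetOptions{})
	    		if err == nil && isPodReady(pod) {
	    			fmt.Println("pod is Ready")
	    			return
	    		}
	    		time.Sleep(2 * time.Second)
	    	}
	    }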
	I1124 14:06:27.133340  159354 start.go:625] kubectl: 1.34.2, cluster: 1.32.0 (minor skew: 2)
	I1124 14:06:27.135972  159354 out.go:203] 
	W1124 14:06:27.137235  159354 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.32.0.
	I1124 14:06:27.138323  159354 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1124 14:06:27.139521  159354 out.go:179] * Done! kubectl is now configured to use "test-preload-684261" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.914598430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993187914575539,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f290c3c0-452d-4df2-95d5-1b6bec983336 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.915500519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45eb26e1-8670-4a5d-a602-d2fc3eab114a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.915591145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45eb26e1-8670-4a5d-a602-d2fc3eab114a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.915756504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00daced45f708e2ae017bf28debf98a1f236435e6d8d64f527df139ce07a8c03,PodSandboxId:3b778b5b928afe9a63a508a10be4024a5a940a0b4c71ba58a2e112de038ffd22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763993175345447398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9nr9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec65392f-6da3-4258-aa01-cf769896ea09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d740a23ef5b3845eb8e9ce739b2985deb97def015fbbc6fe086d4c53cfe9e570,PodSandboxId:613827bc920bd5e309df5abfc19e6d89270e5b09bce95ea55d1120c65724d038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763993171769082376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7rq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e3868f6-829f-4036-80bd-525854bd54fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b15cac94a56a75e637c3984d510817b30321705ef347465ca6b6ded0373aae1,PodSandboxId:89b20f4a116fa3fa7e6d9059b3e9bd7b30663ada33529f066cadca60c6cda79a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763993171757591957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
18e348-2d5b-4648-8495-b62dcf733f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd4ce9806b7f926d02779de5d590fb182e708214a28d445266f66b36d5e8235,PodSandboxId:bcd9489e5740de1eace31b1b99dcbebe7719b18526e578cd38788f5a14970f21,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763993167489661675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64c1ef1462547e47f8bc378225482620,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53aa0d2845710b06d7252026c12ccb7d0d2975bbf954fcb5fca8f7e79f1b5ad7,PodSandboxId:4698902117f1f9bc89174ca5d3869cd0e839d8ad457fa67d60179e78361504ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763993167458781990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a306e6d77af9ff81952de5bdf05fdf04,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43fe823b16148bf3f45c3d4a6da05eda43d18a4cd60b33d50234cbe1833ab47,PodSandboxId:283812de0181632b716ef8f3edcdfbcbd8843d87e7daec180eadc9fe799b1061,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763993167448356781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642a4601d03888eee6ab056a0a741bc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f66d9cf2d2778d5e45ac1a958c7bd2330f8480b2551fbde17fa00d43ec9a326,PodSandboxId:a2555fc11c0d64627a017967bd136b83d2a775149320ea6d506a2683f6747cef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763993167436336937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed670ef4c65bc83b70adb207727a816,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45eb26e1-8670-4a5d-a602-d2fc3eab114a name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.948812585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18320418-2ab6-40ed-ae04-7091fb93c8e5 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.948902525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18320418-2ab6-40ed-ae04-7091fb93c8e5 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.950017672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=407b1f8c-9436-4782-b27e-baaa905a158f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.950461195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993187950435103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=407b1f8c-9436-4782-b27e-baaa905a158f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.951320542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a17bf16-88cc-4b7f-8a54-622174d7f59c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.951385375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a17bf16-88cc-4b7f-8a54-622174d7f59c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.951538550Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00daced45f708e2ae017bf28debf98a1f236435e6d8d64f527df139ce07a8c03,PodSandboxId:3b778b5b928afe9a63a508a10be4024a5a940a0b4c71ba58a2e112de038ffd22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763993175345447398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9nr9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec65392f-6da3-4258-aa01-cf769896ea09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d740a23ef5b3845eb8e9ce739b2985deb97def015fbbc6fe086d4c53cfe9e570,PodSandboxId:613827bc920bd5e309df5abfc19e6d89270e5b09bce95ea55d1120c65724d038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763993171769082376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7rq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e3868f6-829f-4036-80bd-525854bd54fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b15cac94a56a75e637c3984d510817b30321705ef347465ca6b6ded0373aae1,PodSandboxId:89b20f4a116fa3fa7e6d9059b3e9bd7b30663ada33529f066cadca60c6cda79a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763993171757591957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
18e348-2d5b-4648-8495-b62dcf733f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd4ce9806b7f926d02779de5d590fb182e708214a28d445266f66b36d5e8235,PodSandboxId:bcd9489e5740de1eace31b1b99dcbebe7719b18526e578cd38788f5a14970f21,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763993167489661675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64c1ef1462547e47f8bc378225482620,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53aa0d2845710b06d7252026c12ccb7d0d2975bbf954fcb5fca8f7e79f1b5ad7,PodSandboxId:4698902117f1f9bc89174ca5d3869cd0e839d8ad457fa67d60179e78361504ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763993167458781990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a306e6d77af9ff81952de5bdf05fdf04,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43fe823b16148bf3f45c3d4a6da05eda43d18a4cd60b33d50234cbe1833ab47,PodSandboxId:283812de0181632b716ef8f3edcdfbcbd8843d87e7daec180eadc9fe799b1061,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763993167448356781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642a4601d03888eee6ab056a0a741bc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f66d9cf2d2778d5e45ac1a958c7bd2330f8480b2551fbde17fa00d43ec9a326,PodSandboxId:a2555fc11c0d64627a017967bd136b83d2a775149320ea6d506a2683f6747cef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763993167436336937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed670ef4c65bc83b70adb207727a816,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a17bf16-88cc-4b7f-8a54-622174d7f59c name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.985251585Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b666580-a714-4b70-bbde-107e9e516ce6 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.985326679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b666580-a714-4b70-bbde-107e9e516ce6 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.986962039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db363796-f2cb-4b01-8c7d-8a885e7267ed name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.987466992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993187987442168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db363796-f2cb-4b01-8c7d-8a885e7267ed name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.988314419Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09e99b8f-aa8a-4df4-b74e-98428d5536c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.988543144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09e99b8f-aa8a-4df4-b74e-98428d5536c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:27 test-preload-684261 crio[836]: time="2025-11-24 14:06:27.989044801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00daced45f708e2ae017bf28debf98a1f236435e6d8d64f527df139ce07a8c03,PodSandboxId:3b778b5b928afe9a63a508a10be4024a5a940a0b4c71ba58a2e112de038ffd22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763993175345447398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9nr9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec65392f-6da3-4258-aa01-cf769896ea09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d740a23ef5b3845eb8e9ce739b2985deb97def015fbbc6fe086d4c53cfe9e570,PodSandboxId:613827bc920bd5e309df5abfc19e6d89270e5b09bce95ea55d1120c65724d038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763993171769082376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7rq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e3868f6-829f-4036-80bd-525854bd54fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b15cac94a56a75e637c3984d510817b30321705ef347465ca6b6ded0373aae1,PodSandboxId:89b20f4a116fa3fa7e6d9059b3e9bd7b30663ada33529f066cadca60c6cda79a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763993171757591957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
18e348-2d5b-4648-8495-b62dcf733f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd4ce9806b7f926d02779de5d590fb182e708214a28d445266f66b36d5e8235,PodSandboxId:bcd9489e5740de1eace31b1b99dcbebe7719b18526e578cd38788f5a14970f21,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763993167489661675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64c1ef1462547e47f8bc378225482620,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53aa0d2845710b06d7252026c12ccb7d0d2975bbf954fcb5fca8f7e79f1b5ad7,PodSandboxId:4698902117f1f9bc89174ca5d3869cd0e839d8ad457fa67d60179e78361504ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763993167458781990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a306e6d77af9ff81952de5bdf05fdf04,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43fe823b16148bf3f45c3d4a6da05eda43d18a4cd60b33d50234cbe1833ab47,PodSandboxId:283812de0181632b716ef8f3edcdfbcbd8843d87e7daec180eadc9fe799b1061,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763993167448356781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642a4601d03888eee6ab056a0a741bc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f66d9cf2d2778d5e45ac1a958c7bd2330f8480b2551fbde17fa00d43ec9a326,PodSandboxId:a2555fc11c0d64627a017967bd136b83d2a775149320ea6d506a2683f6747cef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763993167436336937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed670ef4c65bc83b70adb207727a816,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09e99b8f-aa8a-4df4-b74e-98428d5536c8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.020456251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7d733b2-da83-45ce-9fca-75e1ef87b401 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.020534848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7d733b2-da83-45ce-9fca-75e1ef87b401 name=/runtime.v1.RuntimeService/Version
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.022452531Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=876839e4-3d1b-4a72-858a-9aa25c83cbbb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.022879101Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993188022855092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=876839e4-3d1b-4a72-858a-9aa25c83cbbb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.023873496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcb95b60-3adc-4518-a39a-1370888ed9e1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.024068849Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcb95b60-3adc-4518-a39a-1370888ed9e1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 24 14:06:28 test-preload-684261 crio[836]: time="2025-11-24 14:06:28.024367364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00daced45f708e2ae017bf28debf98a1f236435e6d8d64f527df139ce07a8c03,PodSandboxId:3b778b5b928afe9a63a508a10be4024a5a940a0b4c71ba58a2e112de038ffd22,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1763993175345447398,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-9nr9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec65392f-6da3-4258-aa01-cf769896ea09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d740a23ef5b3845eb8e9ce739b2985deb97def015fbbc6fe086d4c53cfe9e570,PodSandboxId:613827bc920bd5e309df5abfc19e6d89270e5b09bce95ea55d1120c65724d038,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1763993171769082376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7rq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 2e3868f6-829f-4036-80bd-525854bd54fa,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b15cac94a56a75e637c3984d510817b30321705ef347465ca6b6ded0373aae1,PodSandboxId:89b20f4a116fa3fa7e6d9059b3e9bd7b30663ada33529f066cadca60c6cda79a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763993171757591957,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f
18e348-2d5b-4648-8495-b62dcf733f8b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bd4ce9806b7f926d02779de5d590fb182e708214a28d445266f66b36d5e8235,PodSandboxId:bcd9489e5740de1eace31b1b99dcbebe7719b18526e578cd38788f5a14970f21,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1763993167489661675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64c1ef1462547e47f8bc378225482620,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53aa0d2845710b06d7252026c12ccb7d0d2975bbf954fcb5fca8f7e79f1b5ad7,PodSandboxId:4698902117f1f9bc89174ca5d3869cd0e839d8ad457fa67d60179e78361504ce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1763993167458781990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a306e6d77af9ff81952de5bdf05fdf04,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e43fe823b16148bf3f45c3d4a6da05eda43d18a4cd60b33d50234cbe1833ab47,PodSandboxId:283812de0181632b716ef8f3edcdfbcbd8843d87e7daec180eadc9fe799b1061,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1763993167448356781,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642a4601d03888eee6ab056a0a741bc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f66d9cf2d2778d5e45ac1a958c7bd2330f8480b2551fbde17fa00d43ec9a326,PodSandboxId:a2555fc11c0d64627a017967bd136b83d2a775149320ea6d506a2683f6747cef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1763993167436336937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-684261,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ed670ef4c65bc83b70adb207727a816,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcb95b60-3adc-4518-a39a-1370888ed9e1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	00daced45f708       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   3b778b5b928af       coredns-668d6bf9bc-9nr9m                      kube-system
	d740a23ef5b38       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   613827bc920bd       kube-proxy-f7rq5                              kube-system
	0b15cac94a56a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   89b20f4a116fa       storage-provisioner                           kube-system
	6bd4ce9806b7f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   bcd9489e5740d       etcd-test-preload-684261                      kube-system
	53aa0d2845710       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   4698902117f1f       kube-scheduler-test-preload-684261            kube-system
	e43fe823b1614       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   283812de01816       kube-controller-manager-test-preload-684261   kube-system
	0f66d9cf2d277       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   a2555fc11c0d6       kube-apiserver-test-preload-684261            kube-system
	
	
	==> coredns [00daced45f708e2ae017bf28debf98a1f236435e6d8d64f527df139ce07a8c03] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43075 - 29998 "HINFO IN 7024503012046951834.1669549262584906256. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036933032s
	
	
	==> describe nodes <==
	Name:               test-preload-684261
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-684261
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=test-preload-684261
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T14_04_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 14:04:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-684261
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:06:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:06:12 +0000   Mon, 24 Nov 2025 14:04:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:06:12 +0000   Mon, 24 Nov 2025 14:04:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:06:12 +0000   Mon, 24 Nov 2025 14:04:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:06:12 +0000   Mon, 24 Nov 2025 14:06:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    test-preload-684261
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 f23877d2332a48f08a405b235ce60417
	  System UUID:                f23877d2-332a-48f0-8a40-5b235ce60417
	  Boot ID:                    d2588091-7562-47e5-ba6f-2ad5ea643b75
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-9nr9m                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     109s
	  kube-system                 etcd-test-preload-684261                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         114s
	  kube-system                 kube-apiserver-test-preload-684261             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-test-preload-684261    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-f7rq5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-test-preload-684261             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 107s               kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   Starting                 2m                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node test-preload-684261 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node test-preload-684261 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node test-preload-684261 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    114s               kubelet          Node test-preload-684261 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  114s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  114s               kubelet          Node test-preload-684261 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     114s               kubelet          Node test-preload-684261 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s               kubelet          Starting kubelet.
	  Normal   NodeReady                113s               kubelet          Node test-preload-684261 status is now: NodeReady
	  Normal   RegisteredNode           110s               node-controller  Node test-preload-684261 event: Registered Node test-preload-684261 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-684261 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-684261 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-684261 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-684261 has been rebooted, boot id: d2588091-7562-47e5-ba6f-2ad5ea643b75
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-684261 event: Registered Node test-preload-684261 in Controller
	
	
	==> dmesg <==
	[Nov24 14:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000039] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000836] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.964614] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov24 14:06] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.621095] kauditd_printk_skb: 205 callbacks suppressed
	[  +4.357256] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [6bd4ce9806b7f926d02779de5d590fb182e708214a28d445266f66b36d5e8235] <==
	{"level":"info","ts":"2025-11-24T14:06:07.846372Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","added-peer-id":"ded7f9817c909548","added-peer-peer-urls":["https://192.168.39.58:2380"]}
	{"level":"info","ts":"2025-11-24T14:06:07.846480Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"91c640bc00cd2aea","local-member-id":"ded7f9817c909548","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:06:07.846521Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T14:06:07.848443Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T14:06:07.853666Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T14:06:07.854530Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2025-11-24T14:06:07.857996Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.58:2380"}
	{"level":"info","ts":"2025-11-24T14:06:07.860060Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ded7f9817c909548","initial-advertise-peer-urls":["https://192.168.39.58:2380"],"listen-peer-urls":["https://192.168.39.58:2380"],"advertise-client-urls":["https://192.168.39.58:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.58:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T14:06:07.860160Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T14:06:09.303626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T14:06:09.303680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T14:06:09.303697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgPreVoteResp from ded7f9817c909548 at term 2"}
	{"level":"info","ts":"2025-11-24T14:06:09.303707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T14:06:09.303713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 received MsgVoteResp from ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2025-11-24T14:06:09.303720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ded7f9817c909548 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T14:06:09.303727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ded7f9817c909548 elected leader ded7f9817c909548 at term 3"}
	{"level":"info","ts":"2025-11-24T14:06:09.306000Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ded7f9817c909548","local-member-attributes":"{Name:test-preload-684261 ClientURLs:[https://192.168.39.58:2379]}","request-path":"/0/members/ded7f9817c909548/attributes","cluster-id":"91c640bc00cd2aea","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T14:06:09.306040Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:06:09.306181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T14:06:09.306238Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T14:06:09.306309Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T14:06:09.306814Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T14:06:09.306883Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-24T14:06:09.307487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T14:06:09.307627Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.58:2379"}
	
	
	==> kernel <==
	 14:06:28 up 0 min,  0 users,  load average: 0.61, 0.17, 0.06
	Linux test-preload-684261 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0f66d9cf2d2778d5e45ac1a958c7bd2330f8480b2551fbde17fa00d43ec9a326] <==
	I1124 14:06:10.451669       1 policy_source.go:240] refreshing policies
	I1124 14:06:10.509127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 14:06:10.509297       1 shared_informer.go:320] Caches are synced for configmaps
	I1124 14:06:10.509360       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:06:10.509368       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:06:10.509761       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:06:10.510005       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1124 14:06:10.510035       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:06:10.510041       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:06:10.510045       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:06:10.510049       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:06:10.511256       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1124 14:06:10.511341       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 14:06:10.514059       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:06:10.525553       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:06:11.318358       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:06:11.329360       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	W1124 14:06:11.729847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.58]
	I1124 14:06:11.734444       1 controller.go:615] quota admission added evaluator for: endpoints
	I1124 14:06:11.745135       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:06:12.040058       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1124 14:06:12.084656       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1124 14:06:12.110548       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:06:12.116397       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:06:13.619549       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e43fe823b16148bf3f45c3d4a6da05eda43d18a4cd60b33d50234cbe1833ab47] <==
	I1124 14:06:13.628110       1 shared_informer.go:320] Caches are synced for GC
	I1124 14:06:13.630289       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1124 14:06:13.636574       1 shared_informer.go:320] Caches are synced for resource quota
	I1124 14:06:13.648997       1 shared_informer.go:320] Caches are synced for node
	I1124 14:06:13.649329       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:06:13.649544       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:06:13.650248       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1124 14:06:13.650290       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1124 14:06:13.650402       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-684261"
	I1124 14:06:13.652532       1 shared_informer.go:320] Caches are synced for PV protection
	I1124 14:06:13.655372       1 shared_informer.go:320] Caches are synced for garbage collector
	I1124 14:06:13.655387       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:06:13.655393       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:06:13.659040       1 shared_informer.go:320] Caches are synced for PVC protection
	I1124 14:06:13.661124       1 shared_informer.go:320] Caches are synced for daemon sets
	I1124 14:06:13.664114       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1124 14:06:13.664811       1 shared_informer.go:320] Caches are synced for deployment
	I1124 14:06:13.665390       1 shared_informer.go:320] Caches are synced for attach detach
	I1124 14:06:13.665715       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1124 14:06:13.665746       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1124 14:06:13.665999       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1124 14:06:13.673181       1 shared_informer.go:320] Caches are synced for garbage collector
	I1124 14:06:16.434893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.238µs"
	I1124 14:06:19.548254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.471533ms"
	I1124 14:06:19.549189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="72.64µs"
	
	
	==> kube-proxy [d740a23ef5b3845eb8e9ce739b2985deb97def015fbbc6fe086d4c53cfe9e570] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1124 14:06:11.973757       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1124 14:06:11.988230       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.58"]
	E1124 14:06:11.988284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:06:12.059777       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1124 14:06:12.060209       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1124 14:06:12.060325       1 server_linux.go:170] "Using iptables Proxier"
	I1124 14:06:12.065745       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:06:12.066364       1 server.go:497] "Version info" version="v1.32.0"
	I1124 14:06:12.066391       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:06:12.070335       1 config.go:105] "Starting endpoint slice config controller"
	I1124 14:06:12.070423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1124 14:06:12.070544       1 config.go:199] "Starting service config controller"
	I1124 14:06:12.070738       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1124 14:06:12.072530       1 config.go:329] "Starting node config controller"
	I1124 14:06:12.072614       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1124 14:06:12.170941       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1124 14:06:12.171045       1 shared_informer.go:320] Caches are synced for service config
	I1124 14:06:12.172934       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [53aa0d2845710b06d7252026c12ccb7d0d2975bbf954fcb5fca8f7e79f1b5ad7] <==
	I1124 14:06:07.967939       1 serving.go:386] Generated self-signed cert in-memory
	W1124 14:06:10.396218       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 14:06:10.397030       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:06:10.397130       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 14:06:10.397154       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:06:10.435211       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1124 14:06:10.435243       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:06:10.437061       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:06:10.437089       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 14:06:10.437338       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1124 14:06:10.437431       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:06:10.538718       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: I1124 14:06:10.544360    1169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-684261"
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: E1124 14:06:10.552075    1169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-684261\" already exists" pod="kube-system/kube-controller-manager-test-preload-684261"
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: I1124 14:06:10.552100    1169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-684261"
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: E1124 14:06:10.559464    1169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-684261\" already exists" pod="kube-system/kube-scheduler-test-preload-684261"
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: I1124 14:06:10.559490    1169 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-684261"
	Nov 24 14:06:10 test-preload-684261 kubelet[1169]: E1124 14:06:10.566346    1169 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-684261\" already exists" pod="kube-system/etcd-test-preload-684261"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: I1124 14:06:11.272320    1169 apiserver.go:52] "Watching apiserver"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: E1124 14:06:11.276702    1169 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-9nr9m" podUID="ec65392f-6da3-4258-aa01-cf769896ea09"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: I1124 14:06:11.286523    1169 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: I1124 14:06:11.325293    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e3868f6-829f-4036-80bd-525854bd54fa-lib-modules\") pod \"kube-proxy-f7rq5\" (UID: \"2e3868f6-829f-4036-80bd-525854bd54fa\") " pod="kube-system/kube-proxy-f7rq5"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: I1124 14:06:11.325335    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f18e348-2d5b-4648-8495-b62dcf733f8b-tmp\") pod \"storage-provisioner\" (UID: \"6f18e348-2d5b-4648-8495-b62dcf733f8b\") " pod="kube-system/storage-provisioner"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: I1124 14:06:11.325376    1169 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e3868f6-829f-4036-80bd-525854bd54fa-xtables-lock\") pod \"kube-proxy-f7rq5\" (UID: \"2e3868f6-829f-4036-80bd-525854bd54fa\") " pod="kube-system/kube-proxy-f7rq5"
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: E1124 14:06:11.326102    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: E1124 14:06:11.327196    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume podName:ec65392f-6da3-4258-aa01-cf769896ea09 nodeName:}" failed. No retries permitted until 2025-11-24 14:06:11.827176546 +0000 UTC m=+6.675241448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume") pod "coredns-668d6bf9bc-9nr9m" (UID: "ec65392f-6da3-4258-aa01-cf769896ea09") : object "kube-system"/"coredns" not registered
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: E1124 14:06:11.829362    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 14:06:11 test-preload-684261 kubelet[1169]: E1124 14:06:11.829423    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume podName:ec65392f-6da3-4258-aa01-cf769896ea09 nodeName:}" failed. No retries permitted until 2025-11-24 14:06:12.829409002 +0000 UTC m=+7.677473916 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume") pod "coredns-668d6bf9bc-9nr9m" (UID: "ec65392f-6da3-4258-aa01-cf769896ea09") : object "kube-system"/"coredns" not registered
	Nov 24 14:06:12 test-preload-684261 kubelet[1169]: I1124 14:06:12.145799    1169 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 24 14:06:12 test-preload-684261 kubelet[1169]: E1124 14:06:12.837568    1169 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 14:06:12 test-preload-684261 kubelet[1169]: E1124 14:06:12.838395    1169 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume podName:ec65392f-6da3-4258-aa01-cf769896ea09 nodeName:}" failed. No retries permitted until 2025-11-24 14:06:14.838107721 +0000 UTC m=+9.686172622 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ec65392f-6da3-4258-aa01-cf769896ea09-config-volume") pod "coredns-668d6bf9bc-9nr9m" (UID: "ec65392f-6da3-4258-aa01-cf769896ea09") : object "kube-system"/"coredns" not registered
	Nov 24 14:06:15 test-preload-684261 kubelet[1169]: E1124 14:06:15.357675    1169 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993175343907600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 14:06:15 test-preload-684261 kubelet[1169]: E1124 14:06:15.357699    1169 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993175343907600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 14:06:17 test-preload-684261 kubelet[1169]: I1124 14:06:17.422575    1169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:06:19 test-preload-684261 kubelet[1169]: I1124 14:06:19.519565    1169 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:06:25 test-preload-684261 kubelet[1169]: E1124 14:06:25.360399    1169 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993185360061504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 24 14:06:25 test-preload-684261 kubelet[1169]: E1124 14:06:25.360439    1169 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763993185360061504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0b15cac94a56a75e637c3984d510817b30321705ef347465ca6b6ded0373aae1] <==
	I1124 14:06:11.857543       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-684261 -n test-preload-684261
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-684261 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-684261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-684261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-684261: (1.019014439s)
--- FAIL: TestPreload (162.45s)
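
One pattern worth noting in the kubelet log above: the failed coredns "config-volume" mount is retried with doubling delays (durationBeforeRetry 500ms, then 1s, then 2s), i.e. capped exponential backoff. Below is a minimal Go sketch of that retry schedule, assuming an illustrative 2-minute cap; this is not kubelet's actual implementation, and all names in it are made up for illustration:

	// Capped exponential backoff, as suggested by the kubelet's
	// "No retries permitted until ... (durationBeforeRetry ...)" lines above.
	// The 2m cap and all identifiers here are illustrative assumptions.
	package main

	import (
		"fmt"
		"time"
	)

	// nextDelay doubles the previous delay, clamping at max.
	func nextDelay(prev, max time.Duration) time.Duration {
		if next := 2 * prev; next < max {
			return next
		}
		return max
	}

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
			delay = nextDelay(delay, 2*time.Minute)
		}
	}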


Test pass (309/351)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 26.13
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 13.06
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 80.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 126.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 11.54
35 TestAddons/parallel/Registry 22.33
36 TestAddons/parallel/RegistryCreds 0.65
38 TestAddons/parallel/InspektorGadget 11.67
39 TestAddons/parallel/MetricsServer 7.31
41 TestAddons/parallel/CSI 73.48
42 TestAddons/parallel/Headlamp 25.33
43 TestAddons/parallel/CloudSpanner 5.86
44 TestAddons/parallel/LocalPath 19.38
45 TestAddons/parallel/NvidiaDevicePlugin 6.81
46 TestAddons/parallel/Yakd 11.7
48 TestAddons/StoppedEnableDisable 88.93
49 TestCertOptions 76.58
50 TestCertExpiration 277.34
52 TestForceSystemdFlag 58.43
53 TestForceSystemdEnv 71.73
58 TestErrorSpam/setup 36.37
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.68
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 5.05
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 83.38
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 43.13
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
75 TestFunctional/serial/CacheCmd/cache/add_local 2.25
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.45
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 58.63
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.21
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 4.24
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 14.35
91 TestFunctional/parallel/DryRun 0.27
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.79
97 TestFunctional/parallel/ServiceCmdConnect 13.21
98 TestFunctional/parallel/AddonsCmd 0.18
99 TestFunctional/parallel/PersistentVolumeClaim 44.23
101 TestFunctional/parallel/SSHCmd 0.36
102 TestFunctional/parallel/CpCmd 1.2
103 TestFunctional/parallel/MySQL 27.72
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.34
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
113 TestFunctional/parallel/License 0.48
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
116 TestFunctional/parallel/ProfileCmd/profile_list 0.47
117 TestFunctional/parallel/MountCmd/any-port 10.27
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
119 TestFunctional/parallel/Version/short 0.07
120 TestFunctional/parallel/Version/components 0.42
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.09
126 TestFunctional/parallel/ImageCommands/Setup 2
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.16
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
132 TestFunctional/parallel/ServiceCmd/List 0.44
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
136 TestFunctional/parallel/ServiceCmd/Format 0.33
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.9
138 TestFunctional/parallel/ServiceCmd/URL 0.38
139 TestFunctional/parallel/MountCmd/specific-port 1.41
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.34
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 200.69
161 TestMultiControlPlane/serial/DeployApp 9.2
162 TestMultiControlPlane/serial/PingHostFromPods 1.34
163 TestMultiControlPlane/serial/AddWorkerNode 44.16
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
166 TestMultiControlPlane/serial/CopyFile 10.72
167 TestMultiControlPlane/serial/StopSecondaryNode 88.86
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 35.53
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.7
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 366.1
172 TestMultiControlPlane/serial/DeleteSecondaryNode 19.02
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
174 TestMultiControlPlane/serial/StopCluster 251.87
175 TestMultiControlPlane/serial/RestartCluster 77.75
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
177 TestMultiControlPlane/serial/AddSecondaryNode 81.02
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
183 TestJSONOutput/start/Command 74.84
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.73
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.64
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.84
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.25
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 75.16
215 TestMountStart/serial/StartWithMountFirst 20.2
216 TestMountStart/serial/VerifyMountFirst 0.31
217 TestMountStart/serial/StartWithMountSecond 20.16
218 TestMountStart/serial/VerifyMountSecond 0.32
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.33
221 TestMountStart/serial/Stop 1.27
222 TestMountStart/serial/RestartStopped 18.54
223 TestMountStart/serial/VerifyMountPostStop 0.32
226 TestMultiNode/serial/FreshStart2Nodes 96.56
227 TestMultiNode/serial/DeployApp2Nodes 6.21
228 TestMultiNode/serial/PingHostFrom2Pods 0.87
229 TestMultiNode/serial/AddNode 41.17
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.43
232 TestMultiNode/serial/CopyFile 5.91
233 TestMultiNode/serial/StopNode 2.34
234 TestMultiNode/serial/StartAfterStop 36.71
235 TestMultiNode/serial/RestartKeepsNodes 283.81
236 TestMultiNode/serial/DeleteNode 2.61
237 TestMultiNode/serial/StopMultiNode 166.26
238 TestMultiNode/serial/RestartMultiNode 113.89
239 TestMultiNode/serial/ValidateNameConflict 41.02
246 TestScheduledStopUnix 107.98
250 TestRunningBinaryUpgrade 157.92
252 TestKubernetesUpgrade 177.77
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 81.33
260 TestNoKubernetes/serial/StartWithStopK8s 27.75
265 TestNetworkPlugins/group/false 5.53
269 TestISOImage/Setup 30.28
270 TestNoKubernetes/serial/Start 44.62
272 TestISOImage/Binaries/crictl 0.18
273 TestISOImage/Binaries/curl 0.27
274 TestISOImage/Binaries/docker 0.32
275 TestISOImage/Binaries/git 0.4
276 TestISOImage/Binaries/iptables 0.26
277 TestISOImage/Binaries/podman 0.25
278 TestISOImage/Binaries/rsync 0.19
279 TestISOImage/Binaries/socat 0.23
280 TestISOImage/Binaries/wget 0.18
281 TestISOImage/Binaries/VBoxControl 0.17
282 TestISOImage/Binaries/VBoxService 0.17
283 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
285 TestNoKubernetes/serial/ProfileList 1.94
286 TestNoKubernetes/serial/Stop 1.27
287 TestNoKubernetes/serial/StartNoArgs 55.92
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
289 TestStoppedBinaryUpgrade/Setup 3.02
298 TestPause/serial/Start 118.49
299 TestStoppedBinaryUpgrade/Upgrade 141.56
300 TestNetworkPlugins/group/auto/Start 106.27
301 TestPause/serial/SecondStartNoReconfiguration 36.96
302 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
303 TestNetworkPlugins/group/kindnet/Start 91.44
304 TestPause/serial/Pause 0.76
305 TestPause/serial/VerifyStatus 0.25
306 TestPause/serial/Unpause 0.68
307 TestPause/serial/PauseAgain 0.79
308 TestPause/serial/DeletePaused 0.89
309 TestPause/serial/VerifyDeletedResources 0.54
310 TestNetworkPlugins/group/calico/Start 78.67
311 TestNetworkPlugins/group/auto/KubeletFlags 0.18
312 TestNetworkPlugins/group/auto/NetCatPod 11.26
313 TestNetworkPlugins/group/auto/DNS 0.3
314 TestNetworkPlugins/group/auto/Localhost 0.16
315 TestNetworkPlugins/group/auto/HairPin 0.16
316 TestNetworkPlugins/group/custom-flannel/Start 74.1
317 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/enable-default-cni/Start 83.82
320 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
321 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
322 TestNetworkPlugins/group/calico/KubeletFlags 0.21
323 TestNetworkPlugins/group/calico/NetCatPod 14.28
324 TestNetworkPlugins/group/kindnet/DNS 0.17
325 TestNetworkPlugins/group/kindnet/Localhost 0.12
326 TestNetworkPlugins/group/kindnet/HairPin 0.12
327 TestNetworkPlugins/group/calico/DNS 0.16
328 TestNetworkPlugins/group/calico/Localhost 0.12
329 TestNetworkPlugins/group/calico/HairPin 0.13
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.77
332 TestNetworkPlugins/group/flannel/Start 70.1
333 TestNetworkPlugins/group/bridge/Start 96.32
334 TestNetworkPlugins/group/custom-flannel/DNS 0.15
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
338 TestStartStop/group/old-k8s-version/serial/FirstStart 106.54
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.24
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
344 TestNetworkPlugins/group/flannel/ControllerPod 6.01
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
346 TestNetworkPlugins/group/flannel/NetCatPod 11.26
348 TestStartStop/group/no-preload/serial/FirstStart 98.95
349 TestNetworkPlugins/group/flannel/DNS 0.15
350 TestNetworkPlugins/group/flannel/Localhost 0.13
351 TestNetworkPlugins/group/flannel/HairPin 0.12
352 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
353 TestNetworkPlugins/group/bridge/NetCatPod 11.3
355 TestStartStop/group/embed-certs/serial/FirstStart 81.98
356 TestNetworkPlugins/group/bridge/DNS 0.16
357 TestNetworkPlugins/group/bridge/Localhost 0.15
358 TestNetworkPlugins/group/bridge/HairPin 0.15
360 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.56
361 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
363 TestStartStop/group/old-k8s-version/serial/Stop 87.06
364 TestStartStop/group/no-preload/serial/DeployApp 11.28
365 TestStartStop/group/embed-certs/serial/DeployApp 10.29
366 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
367 TestStartStop/group/no-preload/serial/Stop 71.42
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
369 TestStartStop/group/embed-certs/serial/Stop 88.24
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.26
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 90.62
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
374 TestStartStop/group/old-k8s-version/serial/SecondStart 43.17
375 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.15
376 TestStartStop/group/no-preload/serial/SecondStart 56.76
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 16.01
378 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
379 TestStartStop/group/embed-certs/serial/SecondStart 45.29
380 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
381 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
382 TestStartStop/group/old-k8s-version/serial/Pause 2.9
384 TestStartStop/group/newest-cni/serial/FirstStart 47.6
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.21
387 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.01
389 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
390 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
391 TestStartStop/group/no-preload/serial/Pause 2.88
393 TestISOImage/PersistentMounts//data 0.19
394 TestISOImage/PersistentMounts//var/lib/docker 0.17
395 TestISOImage/PersistentMounts//var/lib/cni 0.17
396 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
397 TestISOImage/PersistentMounts//var/lib/minikube 0.18
398 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
399 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
400 TestISOImage/VersionJSON 0.19
401 TestISOImage/eBPFSupport 0.19
402 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
403 TestStartStop/group/newest-cni/serial/DeployApp 0
404 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
405 TestStartStop/group/newest-cni/serial/Stop 11.63
406 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
407 TestStartStop/group/embed-certs/serial/Pause 2.84
408 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
409 TestStartStop/group/newest-cni/serial/SecondStart 31.47
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.25
414 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
415 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
416 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
417 TestStartStop/group/newest-cni/serial/Pause 2.13

TestDownloadOnly/v1.28.0/json-events (26.13s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-359874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-359874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (26.130157995s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (26.13s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 13:14:41.928057  136268 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 13:14:41.928156  136268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-359874
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-359874: exit status 85 (73.728615ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-359874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-359874 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:15.852776  136281 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:15.852871  136281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:15.852876  136281 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:15.852880  136281 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:15.853093  136281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	W1124 13:14:15.853274  136281 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21932-132228/.minikube/config/config.json: open /home/jenkins/minikube-integration/21932-132228/.minikube/config/config.json: no such file or directory
	I1124 13:14:15.853774  136281 out.go:368] Setting JSON to true
	I1124 13:14:15.855380  136281 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3379,"bootTime":1763986677,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:14:15.855444  136281 start.go:143] virtualization: kvm guest
	I1124 13:14:15.859087  136281 out.go:99] [download-only-359874] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 13:14:15.859227  136281 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 13:14:15.859285  136281 notify.go:221] Checking for updates...
	I1124 13:14:15.860542  136281 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:14:15.861767  136281 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:15.862770  136281 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:14:15.863888  136281 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:14:15.864978  136281 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:14:15.866858  136281 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:14:15.867257  136281 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:16.339706  136281 out.go:99] Using the kvm2 driver based on user configuration
	I1124 13:14:16.339742  136281 start.go:309] selected driver: kvm2
	I1124 13:14:16.339763  136281 start.go:927] validating driver "kvm2" against <nil>
	I1124 13:14:16.340171  136281 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:16.340794  136281 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 13:14:16.340969  136281 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:14:16.341002  136281 cni.go:84] Creating CNI manager for ""
	I1124 13:14:16.341081  136281 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 13:14:16.341096  136281 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:16.341177  136281 start.go:353] cluster config:
	{Name:download-only-359874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-359874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:16.341414  136281 iso.go:125] acquiring lock: {Name:mk70c2563fd35b13c556749f7252ab4e6e575da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:14:16.342833  136281 out.go:99] Downloading VM boot image ...
	I1124 13:14:16.342862  136281 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21932-132228/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1124 13:14:27.988576  136281 out.go:99] Starting "download-only-359874" primary control-plane node in "download-only-359874" cluster
	I1124 13:14:27.988640  136281 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:14:28.092150  136281 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1124 13:14:28.092190  136281 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:28.093034  136281 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:14:28.094611  136281 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 13:14:28.094639  136281 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 13:14:28.203469  136281 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1124 13:14:28.203603  136281 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-359874 host does not exist
	  To start a cluster, run: "minikube start -p download-only-359874"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
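
The Last Start log above also shows minikube's checksum-pinned download flow: it fetches the preload's MD5 from the GCS API, then downloads the tarball with a "?checksum=md5:..." query so the file can be verified after transfer. A minimal Go sketch of that download-then-verify step follows, with a placeholder URL, destination path, and function name; it is not minikube's actual downloader:

	// downloadWithMD5 streams a URL to disk while hashing it, then compares
	// the MD5 against the expected sum, mirroring the "?checksum=md5:..."
	// convention seen in the log above. URL and destination are placeholders.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func downloadWithMD5(url, wantMD5, dest string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash the stream while writing it to disk.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		// Values are illustrative, not a real artifact.
		err := downloadWithMD5("https://example.com/preload.tar.lz4",
			"72bc7f8573f574c02d8c9a9b3496176b", "/tmp/preload.tar.lz4")
		fmt.Println(err)
	}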

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-359874
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (13.06s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238831 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238831 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.058811851s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.06s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 13:14:55.366465  136268 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 13:14:55.366513  136268 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238831
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238831: exit status 85 (72.669339ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-359874 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-359874 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ delete  │ -p download-only-359874                                                                                                                                                 │ download-only-359874 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │ 24 Nov 25 13:14 UTC │
	│ start   │ -o=json --download-only -p download-only-238831 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-238831 │ jenkins │ v1.37.0 │ 24 Nov 25 13:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:14:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:14:42.360069  136546 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:14:42.360393  136546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:42.360404  136546 out.go:374] Setting ErrFile to fd 2...
	I1124 13:14:42.360408  136546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:14:42.360592  136546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:14:42.361034  136546 out.go:368] Setting JSON to true
	I1124 13:14:42.361984  136546 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3405,"bootTime":1763986677,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:14:42.362035  136546 start.go:143] virtualization: kvm guest
	I1124 13:14:42.363861  136546 out.go:99] [download-only-238831] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:14:42.364031  136546 notify.go:221] Checking for updates...
	I1124 13:14:42.365167  136546 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:14:42.366315  136546 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:14:42.367416  136546 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:14:42.368440  136546 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:14:42.369389  136546 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:14:42.371263  136546 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:14:42.371540  136546 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:14:42.404046  136546 out.go:99] Using the kvm2 driver based on user configuration
	I1124 13:14:42.404074  136546 start.go:309] selected driver: kvm2
	I1124 13:14:42.404080  136546 start.go:927] validating driver "kvm2" against <nil>
	I1124 13:14:42.404385  136546 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:14:42.404865  136546 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1124 13:14:42.404997  136546 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:14:42.405023  136546 cni.go:84] Creating CNI manager for ""
	I1124 13:14:42.405071  136546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1124 13:14:42.405080  136546 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1124 13:14:42.405133  136546 start.go:353] cluster config:
	{Name:download-only-238831 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-238831 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:42.405228  136546 iso.go:125] acquiring lock: {Name:mk70c2563fd35b13c556749f7252ab4e6e575da1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:14:42.406616  136546 out.go:99] Starting "download-only-238831" primary control-plane node in "download-only-238831" cluster
	I1124 13:14:42.406633  136546 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:42.508334  136546 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:14:42.508389  136546 cache.go:65] Caching tarball of preloaded images
	I1124 13:14:42.508575  136546 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:42.510232  136546 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 13:14:42.510247  136546 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1124 13:14:43.055893  136546 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1124 13:14:43.055948  136546 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21932-132228/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-238831 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238831"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238831
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.66s)

=== RUN   TestBinaryMirror
I1124 13:14:56.027205  136268 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-955621 --alsologtostderr --binary-mirror http://127.0.0.1:44405 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-955621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-955621
--- PASS: TestBinaryMirror (0.66s)

TestOffline (80.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-623203 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-623203 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.351741363s)
helpers_test.go:175: Cleaning up "offline-crio-623203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-623203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-623203: (2.068686651s)
--- PASS: TestOffline (80.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-377447
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-377447: exit status 85 (72.328957ms)

-- stdout --
	* Profile "addons-377447" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377447"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-377447
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-377447: exit status 85 (71.694137ms)

-- stdout --
	* Profile "addons-377447" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-377447"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (126.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-377447 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-377447 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.370541062s)
--- PASS: TestAddons/Setup (126.37s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-377447 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-377447 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-377447 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-377447 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [de78fc6e-5604-4ab6-a3d1-77bc45527e8f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [de78fc6e-5604-4ab6-a3d1-77bc45527e8f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004098465s
addons_test.go:694: (dbg) Run:  kubectl --context addons-377447 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-377447 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-377447 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.54s)

TestAddons/parallel/Registry (22.33s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.370335ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-g2f52" [f6398e16-e752-4316-8684-c40140559c04] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007393856s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gtc9t" [81f6349a-0cc7-42a4-9413-1690879cf35e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004581597s
addons_test.go:392: (dbg) Run:  kubectl --context addons-377447 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-377447 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-377447 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.576867672s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 ip
2025/11/24 13:17:45 [DEBUG] GET http://192.168.39.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.33s)
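Aside: the DEBUG line above records the host-side probe of the registry addon at http://192.168.39.2:5000. A minimal Go sketch of that reachability check, assuming only net/http and the address from this run's log (substitute the output of "minikube -p addons-377447 ip"); this is illustrative, not the test suite's code:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the addon registry the same way the DEBUG GET above does.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.2:5000")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}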

TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.184115ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-377447
addons_test.go:332: (dbg) Run:  kubectl --context addons-377447 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

TestAddons/parallel/InspektorGadget (11.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cbc7l" [626d47b2-7817-4dee-8381-99d28f9ba31c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004393444s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable inspektor-gadget --alsologtostderr -v=1: (5.665584853s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

TestAddons/parallel/MetricsServer (7.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.700034ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-gwkcq" [88b0b2e7-4699-4713-844e-b052732bf03d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003594793s
addons_test.go:463: (dbg) Run:  kubectl --context addons-377447 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable metrics-server --alsologtostderr -v=1: (1.229776797s)
--- PASS: TestAddons/parallel/MetricsServer (7.31s)

TestAddons/parallel/CSI (73.48s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 13:17:42.753364  136268 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 13:17:42.763509  136268 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 13:17:42.763533  136268 kapi.go:107] duration metric: took 10.193266ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.203759ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-377447 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-377447 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a1ed6da2-c84f-40cb-9fe4-9b78ab6934f8] Pending
helpers_test.go:352: "task-pv-pod" [a1ed6da2-c84f-40cb-9fe4-9b78ab6934f8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a1ed6da2-c84f-40cb-9fe4-9b78ab6934f8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004211991s
addons_test.go:572: (dbg) Run:  kubectl --context addons-377447 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377447 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-377447 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-377447 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-377447 delete pod task-pv-pod: (1.035238704s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-377447 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-377447 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-377447 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [28ec1903-42e5-45ff-9c87-0a13782d5200] Pending
helpers_test.go:352: "task-pv-pod-restore" [28ec1903-42e5-45ff-9c87-0a13782d5200] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [28ec1903-42e5-45ff-9c87-0a13782d5200] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003541201s
addons_test.go:614: (dbg) Run:  kubectl --context addons-377447 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-377447 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-377447 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.745965525s)
--- PASS: TestAddons/parallel/CSI (73.48s)
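Aside: the long runs of identical helpers_test.go:402 lines above are a poll loop, re-reading .status.phase of the claim until it reports Bound. A minimal Go sketch of that pattern, shelling out to kubectl as the log does; the interval and timeout here are illustrative assumptions, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPVCBound polls the PVC's .status.phase via kubectl until it is
// "Bound" or the deadline passes, mirroring the repeated get-pvc calls
// in the log above.
func waitPVCBound(kubectx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed interval, not minikube's
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-377447", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}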

TestAddons/parallel/Headlamp (25.33s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-377447 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-rdxcs" [6154bc27-36cc-45ff-87d7-6a336dbba427] Pending
helpers_test.go:352: "headlamp-dfcdc64b-rdxcs" [6154bc27-36cc-45ff-87d7-6a336dbba427] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-rdxcs" [6154bc27-36cc-45ff-87d7-6a336dbba427] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.008756203s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable headlamp --alsologtostderr -v=1: (6.359470157s)
--- PASS: TestAddons/parallel/Headlamp (25.33s)

TestAddons/parallel/CloudSpanner (5.86s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-vh8qz" [1af8c77b-aa11-44e1-85d1-7e06f98be49e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.026551739s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.86s)

TestAddons/parallel/LocalPath (19.38s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-377447 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-377447 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [798c5f24-a0b6-4b38-859f-7525b4c8fb5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [798c5f24-a0b6-4b38-859f-7525b4c8fb5f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [798c5f24-a0b6-4b38-859f-7525b4c8fb5f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.003737482s
addons_test.go:967: (dbg) Run:  kubectl --context addons-377447 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 ssh "cat /opt/local-path-provisioner/pvc-db4c394a-69ff-46d3-ab48-9593d3fc2b9a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-377447 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-377447 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (19.38s)

TestAddons/parallel/NvidiaDevicePlugin (6.81s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-tdfqm" [d86dfc08-f0af-4c6a-a8ca-886da893bef3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003691296s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.81s)

TestAddons/parallel/Yakd (11.7s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-bkjsk" [87fcb82a-3d90-4787-b6ca-61e2f548b8f5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003361377s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-377447 addons disable yakd --alsologtostderr -v=1: (5.698179274s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

TestAddons/StoppedEnableDisable (88.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-377447
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-377447: (1m28.718079351s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-377447
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-377447
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-377447
--- PASS: TestAddons/StoppedEnableDisable (88.93s)

TestCertOptions (76.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-023408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1124 14:11:46.849396  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-023408 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m15.078629078s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-023408 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-023408 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-023408 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-023408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-023408
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-023408: (1.058214476s)
--- PASS: TestCertOptions (76.58s)

TestCertExpiration (277.34s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-534572 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-534572 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m13.445569555s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-534572 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-534572 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (23.020903553s)
helpers_test.go:175: Cleaning up "cert-expiration-534572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-534572
--- PASS: TestCertExpiration (277.34s)

TestForceSystemdFlag (58.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-433896 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1124 14:10:56.376750  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-433896 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.328588255s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-433896 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-433896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-433896
--- PASS: TestForceSystemdFlag (58.43s)

TestForceSystemdEnv (71.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-528720 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-528720 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.595481131s)
helpers_test.go:175: Cleaning up "force-systemd-env-528720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-528720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-528720: (1.138646861s)
--- PASS: TestForceSystemdEnv (71.73s)

TestErrorSpam/setup (36.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-208563 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-208563 --driver=kvm2  --container-runtime=crio
E1124 13:22:03.779560  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.785955  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.797261  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.818611  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.859975  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:03.941440  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:04.102996  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:04.424746  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:05.066778  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:06.348380  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:08.911297  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:14.033650  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:22:24.276339  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-208563 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-208563 --driver=kvm2  --container-runtime=crio: (36.367373053s)
--- PASS: TestErrorSpam/setup (36.37s)
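Aside: the cert_rotation errors above repeat with roughly doubling gaps, from a few milliseconds apart at 13:22:03.78 to about ten seconds apart by 13:22:24.27, which is the signature of an exponential-backoff retry against the client.crt of the already-deleted addons-377447 profile. A minimal Go sketch of that retry shape, with a 5 ms base delay read off the timestamps; it is illustrative, not client-go's actual cert_rotation loop:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log; the profile was deleted, so reads fail.
	const certPath = "/home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt"
	delay := 5 * time.Millisecond // assumed base, inferred from timestamps
	for attempt := 1; attempt <= 13; attempt++ { // 13 errors appear above
		if _, err := os.ReadFile(certPath); err != nil {
			fmt.Printf("attempt %d: %v (next retry in %s)\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // doubling, matching the gaps between log lines
			continue
		}
		fmt.Println("client cert loaded")
		return
	}
}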

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.68s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 status
--- PASS: TestErrorSpam/status (0.68s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (5.05s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop: (1.810426219s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop: (1.770536982s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-208563 --log_dir /tmp/nospam-208563 stop: (1.467872164s)
--- PASS: TestErrorSpam/stop (5.05s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21932-132228/.minikube/files/etc/test/nested/copy/136268/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (83.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1124 13:22:44.758259  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:23:25.720955  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-419891 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m23.378882823s)
--- PASS: TestFunctional/serial/StartWithProxy (83.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.13s)

=== RUN   TestFunctional/serial/SoftStart
I1124 13:24:00.040617  136268 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-419891 --alsologtostderr -v=8: (43.132857632s)
functional_test.go:678: soft start took 43.133719502s for "functional-419891" cluster.
I1124 13:24:43.173833  136268 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (43.13s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-419891 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:3.1: (1.046397408s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:3.3: (1.021137631s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 cache add registry.k8s.io/pause:latest: (1.037695999s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-419891 /tmp/TestFunctionalserialCacheCmdcacheadd_local3280798528/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache add minikube-local-cache-test:functional-419891
E1124 13:24:47.642279  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 cache add minikube-local-cache-test:functional-419891: (1.897096802s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache delete minikube-local-cache-test:functional-419891
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-419891
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (173.670705ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.45s)
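Aside: the cache_reload sequence above is delete, verify-missing, reload, verify-present: crictl rmi removes the cached pause image on the node, crictl inspecti then fails with "no such image", and after "cache reload" the same inspecti succeeds. A minimal Go sketch of that flow via os/exec, with the binary path and profile name taken from this run; illustrative only, not the test's implementation:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used in this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const profile = "functional-419891"
	const img = "registry.k8s.io/pause:latest"
	run("-p", profile, "ssh", "sudo crictl rmi "+img)
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("image unexpectedly still present")
	}
	run("-p", profile, "cache", "reload")
	if err := run("-p", profile, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}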

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 kubectl -- --context functional-419891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-419891 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (58.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-419891 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (58.626508195s)
functional_test.go:776: restart took 58.626632564s for "functional-419891" cluster.
I1124 13:25:49.419727  136268 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (58.63s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-419891 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
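Aside: the ComponentHealth output above comes from reading the control-plane pods as JSON and reporting each pod's phase plus its Ready condition. A minimal Go sketch of that check, using core/v1 field names and the kubectl invocation shown at functional_test.go:825; the trimmed struct below is an assumption for illustration, not minikube's types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList is a pared-down view of a core/v1 PodList: just the fields
// needed to report phase and the Ready condition.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-419891",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}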

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 logs: (1.204570969s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 logs --file /tmp/TestFunctionalserialLogsFileCmd1847784050/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 logs --file /tmp/TestFunctionalserialLogsFileCmd1847784050/001/logs.txt: (1.247635778s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.24s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-419891 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-419891
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-419891: exit status 115 (256.611861ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.4:31329 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-419891 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
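
The SVC_UNREACHABLE exit is worth a note: a NodePort is allocated as soon as the Service object exists, so the table above prints a URL even though nothing answers on it. What is missing is a ready backend, which shows up as an empty Endpoints object. A hedged sketch of that check (service name taken from the test's manifest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask for the ready endpoint IPs behind the service; an empty result
	// means the printed NodePort URL has nothing to forward to.
	out, err := exec.Command("kubectl", "--context", "functional-419891",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints: the printed URL will not answer")
		return
	}
	fmt.Printf("ready endpoints: %s\n", out)
}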

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 config get cpus: exit status 14 (80.506557ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 config get cpus: exit status 14 (72.06782ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
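
The exit status 14 above is the behavioral signal for "key not set": `config get` on an unset key fails, `config set` makes the same query succeed, and `config unset` returns it to the failing state. A small sketch of reading that exit code from Go, using the same binary path as this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "-p", "functional-419891",
		"config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 14 here mirrors the "specified key could not be found" failure above.
		fmt.Printf("exit status %d: cpus is not set\n", ee.ExitCode())
		return
	}
	fmt.Println("cpus is set")
}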

TestFunctional/parallel/DashboardCmd (14.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419891 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-419891 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 142500: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.35s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419891 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.389849ms)

-- stdout --
	* [functional-419891] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1124 13:25:57.590384  142361 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:25:57.590750  142361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:25:57.590765  142361 out.go:374] Setting ErrFile to fd 2...
	I1124 13:25:57.590772  142361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:25:57.591100  142361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:25:57.591781  142361 out.go:368] Setting JSON to false
	I1124 13:25:57.593065  142361 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4081,"bootTime":1763986677,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:25:57.593157  142361 start.go:143] virtualization: kvm guest
	I1124 13:25:57.594891  142361 out.go:179] * [functional-419891] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:25:57.596447  142361 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:25:57.596451  142361 notify.go:221] Checking for updates...
	I1124 13:25:57.598647  142361 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:25:57.599864  142361 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:25:57.601010  142361 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:25:57.602080  142361 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:25:57.604348  142361 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:25:57.606229  142361 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:25:57.606950  142361 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:25:57.641745  142361 out.go:179] * Using the kvm2 driver based on existing profile
	I1124 13:25:57.642939  142361 start.go:309] selected driver: kvm2
	I1124 13:25:57.642958  142361 start.go:927] validating driver "kvm2" against &{Name:functional-419891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:25:57.643102  142361 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:25:57.645187  142361 out.go:203] 
	W1124 13:25:57.646385  142361 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:25:57.647540  142361 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
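
The dry run fails the same pre-flight validation a real start would: 250MiB requested against a 1800MB usable minimum aborts with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23, before any VM work happens. A minimal sketch of that comparison, with the floor taken from the stderr above (the constant and function names are illustrative, not minikube's):

package main

import "fmt"

const minUsableMB = 1800 // from the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in this test
	fmt.Println(validateMemory(4096)) // the profile's actual allocation passes
}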

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-419891 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-419891 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.726232ms)

-- stdout --
	* [functional-419891] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1124 13:25:57.456164  142336 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:25:57.456296  142336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:25:57.456308  142336 out.go:374] Setting ErrFile to fd 2...
	I1124 13:25:57.456316  142336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:25:57.456693  142336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:25:57.457183  142336 out.go:368] Setting JSON to false
	I1124 13:25:57.458275  142336 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4080,"bootTime":1763986677,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:25:57.458383  142336 start.go:143] virtualization: kvm guest
	I1124 13:25:57.460269  142336 out.go:179] * [functional-419891] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 13:25:57.461888  142336 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:25:57.461897  142336 notify.go:221] Checking for updates...
	I1124 13:25:57.465650  142336 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:25:57.466910  142336 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 13:25:57.468020  142336 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 13:25:57.469104  142336 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:25:57.470253  142336 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:25:57.472211  142336 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:25:57.473001  142336 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:25:57.509241  142336 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1124 13:25:57.510286  142336 start.go:309] selected driver: kvm2
	I1124 13:25:57.510302  142336 start.go:927] validating driver "kvm2" against &{Name:functional-419891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-419891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.4 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:25:57.510424  142336 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:25:57.512464  142336 out.go:203] 
	W1124 13:25:57.513537  142336 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 13:25:57.514465  142336 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
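
The French output above comes from the same dry run driven with a French locale in the environment; nothing else differs from the DryRun test. A sketch of forcing that code path (whether minikube honors LC_ALL, LANG, or both here is an assumption):

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-419891",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANG=fr_FR.UTF-8")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // expected: exit status 23 with the localized message
}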

TestFunctional/parallel/StatusCmd (0.79s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
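
The -f argument above is a Go text/template rendered against minikube's status struct, which is why the literal label text (including the "kublet" spelling in the recorded command) passes through verbatim while only the {{...}} fields are substituted. A sketch with a stand-in struct whose fields are taken from the template keys:

package main

import (
	"os"
	"text/template"
)

type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	_ = tmpl.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
}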

TestFunctional/parallel/ServiceCmdConnect (13.21s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-419891 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-419891 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-lc7b5" [220fc11a-bdfa-4fa2-a44a-28b9e1bd2cfb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-lc7b5" [220fc11a-bdfa-4fa2-a44a-28b9e1bd2cfb] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.417175332s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.4:32452
functional_test.go:1680: http://192.168.39.4:32452: success! body:
Request served by hello-node-connect-7d85dfc575-lc7b5

HTTP/1.1 GET /

Host: 192.168.39.4:32452
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.21s)
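
The success check is just an HTTP GET against the NodePort URL that `service --url` printed, with the echo-server's response body confirming which pod answered. A sketch of that probe (the URL is from this run and changes between runs):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.4:32452")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d, body:\n%s", resp.StatusCode, body)
}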

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (44.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [a75ec760-5393-4617-82ca-ec202179a3c6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.007353581s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-419891 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-419891 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-419891 get pvc myclaim -o=json
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-419891 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-419891 apply -f testdata/storage-provisioner/pod.yaml
I1124 13:26:15.148339  136268 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c037dab7-bf06-40a3-bcae-2a1ee0e5d53f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c037dab7-bf06-40a3-bcae-2a1ee0e5d53f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.004139394s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-419891 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-419891 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-419891 delete -f testdata/storage-provisioner/pod.yaml: (1.079343664s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-419891 apply -f testdata/storage-provisioner/pod.yaml
I1124 13:26:43.464210  136268 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d1cc39db-679b-4005-a1e2-76456b7b02b5] Pending
helpers_test.go:352: "sp-pod" [d1cc39db-679b-4005-a1e2-76456b7b02b5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d1cc39db-679b-4005-a1e2-76456b7b02b5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004241422s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-419891 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.23s)

TestFunctional/parallel/SSHCmd (0.36s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

TestFunctional/parallel/CpCmd (1.2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh -n functional-419891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cp functional-419891:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1057165587/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh -n functional-419891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh -n functional-419891 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)

TestFunctional/parallel/MySQL (27.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-419891 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9x6cp" [4db95528-c7b0-4eb4-80cc-280df073fecb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/24 13:26:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-9x6cp" [4db95528-c7b0-4eb4-80cc-280df073fecb] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.00468697s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-419891 exec mysql-5bb876957f-9x6cp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-419891 exec mysql-5bb876957f-9x6cp -- mysql -ppassword -e "show databases;": exit status 1 (126.101121ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1124 13:26:34.332632  136268 retry.go:31] will retry after 1.498710851s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-419891 exec mysql-5bb876957f-9x6cp -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-419891 exec mysql-5bb876957f-9x6cp -- mysql -ppassword -e "show databases;": exit status 1 (115.273138ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1124 13:26:35.947027  136268 retry.go:31] will retry after 1.565638505s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-419891 exec mysql-5bb876957f-9x6cp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.72s)
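
The two failed exec attempts above are expected: the pod reports Running as soon as the container starts, but mysqld needs a few more seconds before it accepts connections on its socket, so the harness retries with a growing delay until the query succeeds. A minimal sketch of that loop (the backoff factor is an assumption; the real retry.go jitters its delays):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 1500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		err := exec.Command("kubectl", "--context", "functional-419891",
			"exec", "mysql-5bb876957f-9x6cp", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
		if err == nil {
			fmt.Println("mysql is answering")
			return
		}
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
}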

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/136268/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /etc/test/nested/copy/136268/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/136268.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /etc/ssl/certs/136268.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/136268.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /usr/share/ca-certificates/136268.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1362682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /etc/ssl/certs/1362682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1362682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /usr/share/ca-certificates/1362682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.34s)
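
The hashed filenames being checked (51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash convention: each synced certificate also appears in /etc/ssl/certs under <subject-hash>.0 so TLS libraries can find it by subject. A sketch that derives the expected name for one of the synced certs (meant to run inside the VM; the input path is one of the files checked above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// openssl prints the 8-hex-digit subject hash used for the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/etc/ssl/certs/136268.pem").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("expected link: /etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}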

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-419891 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "sudo systemctl is-active docker": exit status 1 (177.952544ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "sudo systemctl is-active containerd": exit status 1 (184.997938ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
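
The non-zero ssh exits here are the point of the test: `systemctl is-active` prints the unit state and exits 3 when the unit is inactive, so "inactive" plus exit status 3 is the desired result for the two runtimes that should be off. A sketch of reading both the state and the code (meant to run inside the VM):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // 3 means inactive
		}
		fmt.Printf("%s: %s (exit %d)\n", unit, strings.TrimSpace(string(out)), code)
	}
}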

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-419891 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-419891 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5nm29" [29da1bb1-2846-475e-bcaa-607a51f05376] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-5nm29" [29da1bb1-2846-475e-bcaa-607a51f05376] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.006932169s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "403.995254ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.954503ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/MountCmd/any-port (10.27s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdany-port444338728/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763990756640069846" to /tmp/TestFunctionalparallelMountCmdany-port444338728/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763990756640069846" to /tmp/TestFunctionalparallelMountCmdany-port444338728/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763990756640069846" to /tmp/TestFunctionalparallelMountCmdany-port444338728/001/test-1763990756640069846
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.639625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 13:25:56.871190  136268 retry.go:31] will retry after 491.758455ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 13:25 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 13:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 13:25 test-1763990756640069846
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh cat /mount-9p/test-1763990756640069846
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-419891 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [68455a1a-92f4-454f-a5c9-269edf30ec0c] Pending
helpers_test.go:352: "busybox-mount" [68455a1a-92f4-454f-a5c9-269edf30ec0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [68455a1a-92f4-454f-a5c9-269edf30ec0c] Running
helpers_test.go:352: "busybox-mount" [68455a1a-92f4-454f-a5c9-269edf30ec0c] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [68455a1a-92f4-454f-a5c9-269edf30ec0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004800391s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-419891 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdany-port444338728/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.27s)
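
The first findmnt probe above races the mount daemon, which is why a single retry suffices; polling the same pipeline the test uses until the 9p mount shows up makes the race explicit. A sketch (interval and attempt count are arbitrary; meant to run inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// Same probe as the test: is /mount-9p backed by a 9p filesystem yet?
		if exec.Command("sh", "-c", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("9p mount is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}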

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "255.262356ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.00093ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419891 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-419891
localhost/kicbase/echo-server:functional-419891
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419891 image ls --format short --alsologtostderr:
I1124 13:26:12.685069  143172 out.go:360] Setting OutFile to fd 1 ...
I1124 13:26:12.685340  143172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:12.685349  143172 out.go:374] Setting ErrFile to fd 2...
I1124 13:26:12.685354  143172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:12.685536  143172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
I1124 13:26:12.686063  143172 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:12.686174  143172 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:12.688186  143172 ssh_runner.go:195] Run: systemctl --version
I1124 13:26:12.690364  143172 main.go:143] libmachine: domain functional-419891 has defined MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:12.690794  143172 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:eb:8c", ip: ""} in network mk-functional-419891: {Iface:virbr1 ExpiryTime:2025-11-24 14:22:51 +0000 UTC Type:0 Mac:52:54:00:03:eb:8c Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-419891 Clientid:01:52:54:00:03:eb:8c}
I1124 13:26:12.690820  143172 main.go:143] libmachine: domain functional-419891 has defined IP address 192.168.39.4 and MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:12.690957  143172 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/functional-419891/id_rsa Username:docker}
I1124 13:26:12.772912  143172 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419891 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-419891  │ 521da8b34585b │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-419891  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419891 image ls --format table --alsologtostderr:
I1124 13:26:13.061162  143194 out.go:360] Setting OutFile to fd 1 ...
I1124 13:26:13.061450  143194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.061461  143194 out.go:374] Setting ErrFile to fd 2...
I1124 13:26:13.061465  143194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.061663  143194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
I1124 13:26:13.062265  143194 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.062362  143194 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.064375  143194 ssh_runner.go:195] Run: systemctl --version
I1124 13:26:13.066355  143194 main.go:143] libmachine: domain functional-419891 has defined MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.066791  143194 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:eb:8c", ip: ""} in network mk-functional-419891: {Iface:virbr1 ExpiryTime:2025-11-24 14:22:51 +0000 UTC Type:0 Mac:52:54:00:03:eb:8c Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-419891 Clientid:01:52:54:00:03:eb:8c}
I1124 13:26:13.066818  143194 main.go:143] libmachine: domain functional-419891 has defined IP address 192.168.39.4 and MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.066940  143194 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/functional-419891/id_rsa Username:docker}
I1124 13:26:13.148921  143194 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)
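
All of the image list formats, including the JSON dump in the next test, are views over the same `sudo crictl images --output json` call visible in the stderr traces. A sketch of decoding that JSON with a trimmed struct (field names per crictl's output; size is serialized as a string):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string
		RepoTags    []string
		RepoDigests []string
		Size        string
	}
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs imageList
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}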

TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419891 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-419891"],"size":"4945146"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"521da8b34585bfb7956a37f9967463db36ad62001c343f8664bf064cb12721be","repoDigests":["localhost/minikube-local-cache-test@sha256:a1b8e9d5780f4c945a4cb95be80198c3244897d919bd716da6ace3e1ba61e540"],"repoTags":["localhost/minikube-local-cache-test:functional-419891"],"size":"3330"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419891 image ls --format json --alsologtostderr:
I1124 13:26:12.875902  143183 out.go:360] Setting OutFile to fd 1 ...
I1124 13:26:12.876204  143183 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:12.876214  143183 out.go:374] Setting ErrFile to fd 2...
I1124 13:26:12.876219  143183 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:12.876453  143183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
I1124 13:26:12.877050  143183 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:12.877182  143183 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:12.879356  143183 ssh_runner.go:195] Run: systemctl --version
I1124 13:26:12.881482  143183 main.go:143] libmachine: domain functional-419891 has defined MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:12.881889  143183 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:eb:8c", ip: ""} in network mk-functional-419891: {Iface:virbr1 ExpiryTime:2025-11-24 14:22:51 +0000 UTC Type:0 Mac:52:54:00:03:eb:8c Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-419891 Clientid:01:52:54:00:03:eb:8c}
I1124 13:26:12.881913  143183 main.go:143] libmachine: domain functional-419891 has defined IP address 192.168.39.4 and MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:12.882026  143183 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/functional-419891/id_rsa Username:docker}
I1124 13:26:12.959543  143183 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
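The JSON printed above is a flat array of {id, repoDigests, repoTags, size} objects, so it is easy to post-process on the host. A minimal sketch, assuming jq is available (this is not something the test itself runs):

out/minikube-linux-amd64 -p functional-419891 image ls --format json \
  | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'
# prints one "tag<TAB>size-in-bytes" line per image; untagged images
# (empty repoTags, like the dashboard image above) fall back to "<none>"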

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419891 image ls --format yaml --alsologtostderr:
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 521da8b34585bfb7956a37f9967463db36ad62001c343f8664bf064cb12721be
repoDigests:
- localhost/minikube-local-cache-test@sha256:a1b8e9d5780f4c945a4cb95be80198c3244897d919bd716da6ace3e1ba61e540
repoTags:
- localhost/minikube-local-cache-test:functional-419891
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-419891
size: "4945146"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419891 image ls --format yaml --alsologtostderr:
I1124 13:26:13.250681  143205 out.go:360] Setting OutFile to fd 1 ...
I1124 13:26:13.250955  143205 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.250965  143205 out.go:374] Setting ErrFile to fd 2...
I1124 13:26:13.250970  143205 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.251180  143205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
I1124 13:26:13.251728  143205 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.251830  143205 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.254156  143205 ssh_runner.go:195] Run: systemctl --version
I1124 13:26:13.256782  143205 main.go:143] libmachine: domain functional-419891 has defined MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.257230  143205 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:eb:8c", ip: ""} in network mk-functional-419891: {Iface:virbr1 ExpiryTime:2025-11-24 14:22:51 +0000 UTC Type:0 Mac:52:54:00:03:eb:8c Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-419891 Clientid:01:52:54:00:03:eb:8c}
I1124 13:26:13.257256  143205 main.go:143] libmachine: domain functional-419891 has defined IP address 192.168.39.4 and MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.257406  143205 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/functional-419891/id_rsa Username:docker}
I1124 13:26:13.338656  143205 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh pgrep buildkitd: exit status 1 (203.191414ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image build -t localhost/my-image:functional-419891 testdata/build --alsologtostderr
I1124 13:26:13.657958  136268 retry.go:31] will retry after 1.299952928s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:80f25ed9-6b69-4702-8c34-063589346570 ResourceVersion:848 Generation:0 CreationTimestamp:2025-11-24 13:26:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001efc210 VolumeMode:0xc001efc220 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-419891 image build -t localhost/my-image:functional-419891 testdata/build --alsologtostderr: (3.670559556s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-419891 image build -t localhost/my-image:functional-419891 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e83e9b0249c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-419891
--> 07639fe6088
Successfully tagged localhost/my-image:functional-419891
07639fe6088283d77422c6db48e61c31122988a73aa71741eb29026c93325950
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-419891 image build -t localhost/my-image:functional-419891 testdata/build --alsologtostderr:
I1124 13:26:13.682809  143256 out.go:360] Setting OutFile to fd 1 ...
I1124 13:26:13.683131  143256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.683142  143256 out.go:374] Setting ErrFile to fd 2...
I1124 13:26:13.683149  143256 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:26:13.683403  143256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
I1124 13:26:13.684011  143256 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.684939  143256 config.go:182] Loaded profile config "functional-419891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:26:13.687556  143256 ssh_runner.go:195] Run: systemctl --version
I1124 13:26:13.689858  143256 main.go:143] libmachine: domain functional-419891 has defined MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.690398  143256 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:03:eb:8c", ip: ""} in network mk-functional-419891: {Iface:virbr1 ExpiryTime:2025-11-24 14:22:51 +0000 UTC Type:0 Mac:52:54:00:03:eb:8c Iaid: IPaddr:192.168.39.4 Prefix:24 Hostname:functional-419891 Clientid:01:52:54:00:03:eb:8c}
I1124 13:26:13.690438  143256 main.go:143] libmachine: domain functional-419891 has defined IP address 192.168.39.4 and MAC address 52:54:00:03:eb:8c in network mk-functional-419891
I1124 13:26:13.690620  143256 sshutil.go:53] new ssh client: &{IP:192.168.39.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/functional-419891/id_rsa Username:docker}
I1124 13:26:13.791343  143256 build_images.go:162] Building image from path: /tmp/build.2573674337.tar
I1124 13:26:13.791437  143256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 13:26:13.817178  143256 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2573674337.tar
I1124 13:26:13.823867  143256 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2573674337.tar: stat -c "%s %y" /var/lib/minikube/build/build.2573674337.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2573674337.tar': No such file or directory
I1124 13:26:13.823906  143256 ssh_runner.go:362] scp /tmp/build.2573674337.tar --> /var/lib/minikube/build/build.2573674337.tar (3072 bytes)
I1124 13:26:13.878508  143256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2573674337
I1124 13:26:13.898725  143256 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2573674337 -xf /var/lib/minikube/build/build.2573674337.tar
I1124 13:26:13.912262  143256 crio.go:315] Building image: /var/lib/minikube/build/build.2573674337
I1124 13:26:13.912338  143256 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-419891 /var/lib/minikube/build/build.2573674337 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 13:26:17.234221  143256 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-419891 /var/lib/minikube/build/build.2573674337 --cgroup-manager=cgroupfs: (3.321843292s)
I1124 13:26:17.234375  143256 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2573674337
I1124 13:26:17.249139  143256 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2573674337.tar
I1124 13:26:17.262881  143256 build_images.go:218] Built localhost/my-image:functional-419891 from /tmp/build.2573674337.tar
I1124 13:26:17.262937  143256 build_images.go:134] succeeded building to: functional-419891
I1124 13:26:17.262945  143256 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.09s)
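The STEP lines in the build output pin down the shape of the testdata/build context: a three-step Dockerfile plus a content.txt payload. A reconstruction under those assumptions (the payload bytes are not visible in the log, so the printf line is illustrative):

mkdir -p testdata/build
printf 'test payload\n' > testdata/build/content.txt   # assumed contents
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-419891 image build \
  -t localhost/my-image:functional-419891 testdata/build

On the crio runtime this runs inside the guest as the "sudo podman build ... --cgroup-manager=cgroupfs" call visible in the stderr above.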

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.980737143s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-419891
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image load --daemon kicbase/echo-server:functional-419891 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image load --daemon kicbase/echo-server:functional-419891 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-419891
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image load --daemon kicbase/echo-server:functional-419891 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image save kicbase/echo-server:functional-419891 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image rm kicbase/echo-server:functional-419891 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ServiceCmd/List (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)
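ImageSaveToFile, ImageRemove, and ImageLoadFromFile together exercise a save/remove/load round-trip. The same flow by hand, with /tmp standing in for the workspace path the job uses (an assumption):

MK="out/minikube-linux-amd64 -p functional-419891"
$MK image save kicbase/echo-server:functional-419891 /tmp/echo-server-save.tar
$MK image rm   kicbase/echo-server:functional-419891
$MK image load /tmp/echo-server-save.tar
$MK image ls   # the echo-server tag should be listed again after the load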

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service list -o json
functional_test.go:1504: Took "404.961021ms" to run "out/minikube-linux-amd64 -p functional-419891 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.4:31608
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-419891
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 image save --daemon kicbase/echo-server:functional-419891 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-419891
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.90s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.4:31608
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
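The ServiceCmd subtests above all resolve the same NodePort through different front-ends (list, JSON, --https, --format, --url). Equivalently, by hand; the endpoint value comes from the log, and the final curl is an assumed probe the test itself does not make:

out/minikube-linux-amd64 -p functional-419891 service hello-node --url
# -> http://192.168.39.4:31608
out/minikube-linux-amd64 -p functional-419891 service --namespace=default --https --url hello-node
# -> https://192.168.39.4:31608
curl -s http://192.168.39.4:31608/    # assumed manual check of the echo server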

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdspecific-port2649701330/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.442949ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1124 13:26:07.129642  136268 retry.go:31] will retry after 407.877013ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdspecific-port2649701330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "sudo umount -f /mount-9p": exit status 1 (190.991616ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-419891 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdspecific-port2649701330/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.41s)
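The specific-port flow can be reproduced outside the test harness; a sketch assuming two shells and an arbitrary host directory (/tmp/src below is an assumption), since minikube mount stays in the foreground, which is why the test runs it as a daemon:

# shell 1: expose a host directory inside the guest over 9p on a fixed port
out/minikube-linux-amd64 mount -p functional-419891 /tmp/src:/mount-9p --port 46464
# shell 2: verify the mount, then tear it down
out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-419891 ssh "sudo umount -f /mount-9p"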

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T" /mount1: exit status 1 (253.676536ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1124 13:26:08.578074  136268 retry.go:31] will retry after 381.226505ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-419891 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-419891 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1541671945/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-419891 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
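All three UpdateContextCmd subtests run the same command, which rewrites the profile's kubeconfig entry so kubectl targets the cluster's current API server address. A sketch, with the kubectl line being an assumed follow-up check rather than part of the test:

out/minikube-linux-amd64 -p functional-419891 update-context
kubectl config current-context   # assumed check; expected to print functional-419891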

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-419891
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-419891
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-419891
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (200.69s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1124 13:27:03.771151  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:27:31.484266  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m20.138767585s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.69s)

TestMultiControlPlane/serial/DeployApp (9.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 kubectl -- rollout status deployment/busybox: (6.805796841s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-h8lbn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wk7s6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wvbmm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-h8lbn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wk7s6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wvbmm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-h8lbn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wk7s6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wvbmm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.20s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-h8lbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-h8lbn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wk7s6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wk7s6 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wvbmm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 kubectl -- exec busybox-7b57f96db7-wvbmm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
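The shell pipeline in the exec commands above extracts the gateway address from busybox nslookup output: awk 'NR==5' keeps what is normally the answer line ("Address 1: <ip> <name>") and cut -d' ' -f3 takes its third space-separated field, the IP that is then pinged. Reproduced directly with kubectl (an assumed equivalent of the minikube kubectl -- wrapper the test uses):

kubectl --context ha-613814 exec busybox-7b57f96db7-h8lbn -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# -> 192.168.39.1, matching the ping target in the log above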

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (44.16s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node add --alsologtostderr -v 5
E1124 13:30:56.377742  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.384196  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.395595  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.417073  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.458596  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.540091  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:56.701740  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:57.023196  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:57.665339  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:58.946985  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:31:01.508614  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:31:06.630792  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 node add --alsologtostderr -v 5: (43.501398191s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.16s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-613814 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (10.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp testdata/cp-test.txt ha-613814:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3314166638/001/cp-test_ha-613814.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814:/home/docker/cp-test.txt ha-613814-m02:/home/docker/cp-test_ha-613814_ha-613814-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test_ha-613814_ha-613814-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814:/home/docker/cp-test.txt ha-613814-m03:/home/docker/cp-test_ha-613814_ha-613814-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test_ha-613814_ha-613814-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814:/home/docker/cp-test.txt ha-613814-m04:/home/docker/cp-test_ha-613814_ha-613814-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test_ha-613814_ha-613814-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp testdata/cp-test.txt ha-613814-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3314166638/001/cp-test_ha-613814-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m02:/home/docker/cp-test.txt ha-613814:/home/docker/cp-test_ha-613814-m02_ha-613814.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test_ha-613814-m02_ha-613814.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m02:/home/docker/cp-test.txt ha-613814-m03:/home/docker/cp-test_ha-613814-m02_ha-613814-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test_ha-613814-m02_ha-613814-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m02:/home/docker/cp-test.txt ha-613814-m04:/home/docker/cp-test_ha-613814-m02_ha-613814-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test_ha-613814-m02_ha-613814-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp testdata/cp-test.txt ha-613814-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3314166638/001/cp-test_ha-613814-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m03:/home/docker/cp-test.txt ha-613814:/home/docker/cp-test_ha-613814-m03_ha-613814.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test_ha-613814-m03_ha-613814.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m03:/home/docker/cp-test.txt ha-613814-m02:/home/docker/cp-test_ha-613814-m03_ha-613814-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test_ha-613814-m03_ha-613814-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m03:/home/docker/cp-test.txt ha-613814-m04:/home/docker/cp-test_ha-613814-m03_ha-613814-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test_ha-613814-m03_ha-613814-m04.txt"
E1124 13:31:16.872894  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp testdata/cp-test.txt ha-613814-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3314166638/001/cp-test_ha-613814-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m04:/home/docker/cp-test.txt ha-613814:/home/docker/cp-test_ha-613814-m04_ha-613814.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814 "sudo cat /home/docker/cp-test_ha-613814-m04_ha-613814.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m04:/home/docker/cp-test.txt ha-613814-m02:/home/docker/cp-test_ha-613814-m04_ha-613814-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m02 "sudo cat /home/docker/cp-test_ha-613814-m04_ha-613814-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 cp ha-613814-m04:/home/docker/cp-test.txt ha-613814-m03:/home/docker/cp-test_ha-613814-m04_ha-613814-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 ssh -n ha-613814-m03 "sudo cat /home/docker/cp-test_ha-613814-m04_ha-613814-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.72s)
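
The CopyFile transcript above repeats one pattern per node pair: "minikube cp" places a file on a node, then "ssh -n <node> sudo cat ..." reads it back to confirm the contents survived the round trip. Below is a minimal Go sketch of that loop; the binary path, profile, node names, and file paths are taken verbatim from the commands above, everything else is scaffolding.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-613814"
	nodes := []string{"ha-613814", "ha-613814-m02", "ha-613814-m03", "ha-613814-m04"}
	for _, n := range nodes {
		// minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
		dst := n + ":/home/docker/cp-test.txt"
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"cp", "testdata/cp-test.txt", dst).Run(); err != nil {
			fmt.Printf("%s: cp failed: %v\n", n, err)
			continue
		}
		// minikube ssh -n <node> "sudo cat /home/docker/cp-test.txt"
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "-n", n, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Printf("%s: ssh failed: %v\n", n, err)
			continue
		}
		fmt.Printf("%s: %q\n", n, out)
	}
}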

TestMultiControlPlane/serial/StopSecondaryNode (88.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node stop m02 --alsologtostderr -v 5
E1124 13:31:37.354508  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:32:03.770623  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:32:18.316314  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 node stop m02 --alsologtostderr -v 5: (1m28.372801515s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5: exit status 7 (488.386423ms)
-- stdout --
	ha-613814
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-613814-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613814-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-613814-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1124 13:32:47.967030  146518 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:32:47.967393  146518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:32:47.967406  146518 out.go:374] Setting ErrFile to fd 2...
	I1124 13:32:47.967414  146518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:32:47.967637  146518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:32:47.967839  146518 out.go:368] Setting JSON to false
	I1124 13:32:47.967880  146518 mustload.go:66] Loading cluster: ha-613814
	I1124 13:32:47.968006  146518 notify.go:221] Checking for updates...
	I1124 13:32:47.968375  146518 config.go:182] Loaded profile config "ha-613814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:32:47.968401  146518 status.go:174] checking status of ha-613814 ...
	I1124 13:32:47.970529  146518 status.go:371] ha-613814 host status = "Running" (err=<nil>)
	I1124 13:32:47.970549  146518 host.go:66] Checking if "ha-613814" exists ...
	I1124 13:32:47.972969  146518 main.go:143] libmachine: domain ha-613814 has defined MAC address 52:54:00:aa:8b:fe in network mk-ha-613814
	I1124 13:32:47.973544  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:8b:fe", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:27:07 +0000 UTC Type:0 Mac:52:54:00:aa:8b:fe Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-613814 Clientid:01:52:54:00:aa:8b:fe}
	I1124 13:32:47.973585  146518 main.go:143] libmachine: domain ha-613814 has defined IP address 192.168.39.157 and MAC address 52:54:00:aa:8b:fe in network mk-ha-613814
	I1124 13:32:47.973741  146518 host.go:66] Checking if "ha-613814" exists ...
	I1124 13:32:47.973934  146518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:32:47.976197  146518 main.go:143] libmachine: domain ha-613814 has defined MAC address 52:54:00:aa:8b:fe in network mk-ha-613814
	I1124 13:32:47.976610  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:8b:fe", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:27:07 +0000 UTC Type:0 Mac:52:54:00:aa:8b:fe Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-613814 Clientid:01:52:54:00:aa:8b:fe}
	I1124 13:32:47.976635  146518 main.go:143] libmachine: domain ha-613814 has defined IP address 192.168.39.157 and MAC address 52:54:00:aa:8b:fe in network mk-ha-613814
	I1124 13:32:47.976783  146518 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/ha-613814/id_rsa Username:docker}
	I1124 13:32:48.061365  146518 ssh_runner.go:195] Run: systemctl --version
	I1124 13:32:48.070202  146518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:32:48.089675  146518 kubeconfig.go:125] found "ha-613814" server: "https://192.168.39.254:8443"
	I1124 13:32:48.089719  146518 api_server.go:166] Checking apiserver status ...
	I1124 13:32:48.089773  146518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:32:48.110420  146518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup
	W1124 13:32:48.123323  146518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1393/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:32:48.123391  146518 ssh_runner.go:195] Run: ls
	I1124 13:32:48.129540  146518 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 13:32:48.134835  146518 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 13:32:48.134868  146518 status.go:463] ha-613814 apiserver status = Running (err=<nil>)
	I1124 13:32:48.134883  146518 status.go:176] ha-613814 status: &{Name:ha-613814 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:32:48.134911  146518 status.go:174] checking status of ha-613814-m02 ...
	I1124 13:32:48.136641  146518 status.go:371] ha-613814-m02 host status = "Stopped" (err=<nil>)
	I1124 13:32:48.136661  146518 status.go:384] host is not running, skipping remaining checks
	I1124 13:32:48.136669  146518 status.go:176] ha-613814-m02 status: &{Name:ha-613814-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:32:48.136690  146518 status.go:174] checking status of ha-613814-m03 ...
	I1124 13:32:48.138036  146518 status.go:371] ha-613814-m03 host status = "Running" (err=<nil>)
	I1124 13:32:48.138055  146518 host.go:66] Checking if "ha-613814-m03" exists ...
	I1124 13:32:48.140617  146518 main.go:143] libmachine: domain ha-613814-m03 has defined MAC address 52:54:00:2f:41:38 in network mk-ha-613814
	I1124 13:32:48.141058  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2f:41:38", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:29:04 +0000 UTC Type:0 Mac:52:54:00:2f:41:38 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-613814-m03 Clientid:01:52:54:00:2f:41:38}
	I1124 13:32:48.141085  146518 main.go:143] libmachine: domain ha-613814-m03 has defined IP address 192.168.39.98 and MAC address 52:54:00:2f:41:38 in network mk-ha-613814
	I1124 13:32:48.141283  146518 host.go:66] Checking if "ha-613814-m03" exists ...
	I1124 13:32:48.141529  146518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:32:48.143934  146518 main.go:143] libmachine: domain ha-613814-m03 has defined MAC address 52:54:00:2f:41:38 in network mk-ha-613814
	I1124 13:32:48.144339  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2f:41:38", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:29:04 +0000 UTC Type:0 Mac:52:54:00:2f:41:38 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:ha-613814-m03 Clientid:01:52:54:00:2f:41:38}
	I1124 13:32:48.144364  146518 main.go:143] libmachine: domain ha-613814-m03 has defined IP address 192.168.39.98 and MAC address 52:54:00:2f:41:38 in network mk-ha-613814
	I1124 13:32:48.144514  146518 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/ha-613814-m03/id_rsa Username:docker}
	I1124 13:32:48.224352  146518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:32:48.242160  146518 kubeconfig.go:125] found "ha-613814" server: "https://192.168.39.254:8443"
	I1124 13:32:48.242197  146518 api_server.go:166] Checking apiserver status ...
	I1124 13:32:48.242263  146518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:32:48.261099  146518 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1760/cgroup
	W1124 13:32:48.274382  146518 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1760/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:32:48.274455  146518 ssh_runner.go:195] Run: ls
	I1124 13:32:48.279175  146518 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1124 13:32:48.283650  146518 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1124 13:32:48.283674  146518 status.go:463] ha-613814-m03 apiserver status = Running (err=<nil>)
	I1124 13:32:48.283686  146518 status.go:176] ha-613814-m03 status: &{Name:ha-613814-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:32:48.283708  146518 status.go:174] checking status of ha-613814-m04 ...
	I1124 13:32:48.285243  146518 status.go:371] ha-613814-m04 host status = "Running" (err=<nil>)
	I1124 13:32:48.285262  146518 host.go:66] Checking if "ha-613814-m04" exists ...
	I1124 13:32:48.287814  146518 main.go:143] libmachine: domain ha-613814-m04 has defined MAC address 52:54:00:21:99:0b in network mk-ha-613814
	I1124 13:32:48.288204  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:99:0b", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:30:39 +0000 UTC Type:0 Mac:52:54:00:21:99:0b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-613814-m04 Clientid:01:52:54:00:21:99:0b}
	I1124 13:32:48.288232  146518 main.go:143] libmachine: domain ha-613814-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:21:99:0b in network mk-ha-613814
	I1124 13:32:48.288375  146518 host.go:66] Checking if "ha-613814-m04" exists ...
	I1124 13:32:48.288581  146518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:32:48.291042  146518 main.go:143] libmachine: domain ha-613814-m04 has defined MAC address 52:54:00:21:99:0b in network mk-ha-613814
	I1124 13:32:48.291480  146518 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:21:99:0b", ip: ""} in network mk-ha-613814: {Iface:virbr1 ExpiryTime:2025-11-24 14:30:39 +0000 UTC Type:0 Mac:52:54:00:21:99:0b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-613814-m04 Clientid:01:52:54:00:21:99:0b}
	I1124 13:32:48.291506  146518 main.go:143] libmachine: domain ha-613814-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:21:99:0b in network mk-ha-613814
	I1124 13:32:48.291658  146518 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/ha-613814-m04/id_rsa Username:docker}
	I1124 13:32:48.374884  146518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:32:48.392077  146518 status.go:176] ha-613814-m04 status: &{Name:ha-613814-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.86s)
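
The status report above shows the per-node struct minikube logs (Name, Host, Kubelet, APIServer, Kubeconfig), and the exit status 7 shows that "status" returns non-zero while any node is down even though it still prints a full report. A sketch of consuming that programmatically via the machine-readable form follows; treating "--output json" as a JSON list for multi-node profiles is an assumption, with a fallback for the single-object case.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names mirror the status struct visible in the stderr log above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-613814", "status", "--output", "json")
	out, err := cmd.Output()
	if ee, ok := err.(*exec.ExitError); ok {
		// Non-zero (7 in the run above) means at least one node is unhealthy;
		// stdout still carries the report.
		fmt.Println("status exit code:", ee.ExitCode())
	}
	var nodes []nodeStatus
	if json.Unmarshal(out, &nodes) != nil {
		var single nodeStatus // assumed: single-node profiles print one object
		if json.Unmarshal(out, &single) == nil {
			nodes = []nodeStatus{single}
		}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
	}
}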

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

TestMultiControlPlane/serial/RestartSecondaryNode (35.53s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 node start m02 --alsologtostderr -v 5: (34.733599794s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.70s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 stop --alsologtostderr -v 5
E1124 13:33:40.238123  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:35:56.378195  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:24.079572  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:37:03.770260  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 stop --alsologtostderr -v 5: (4m7.870238388s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 start --wait true --alsologtostderr -v 5
E1124 13:38:26.846540  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 start --wait true --alsologtostderr -v 5: (1m58.087426272s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.10s)

TestMultiControlPlane/serial/DeleteSecondaryNode (19.02s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 node delete m03 --alsologtostderr -v 5: (18.385999943s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.02s)
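
The final kubectl call above walks every node's conditions with a go-template and prints each Ready status. That template logic can be exercised offline with Go's text/template; note that kubectl evaluates the template against the JSON document (lowercase .items, .status.conditions), whereas this stand-in uses exported struct fields, so the names are capitalized here.

package main

import (
	"os"
	"text/template"
)

type condition struct{ Type, Status string }

type node struct {
	Status struct{ Conditions []condition }
}

type nodeList struct{ Items []node }

// Same shape as the kubectl template above, with capitalized field names.
const tmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var ready node
	ready.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
	list := nodeList{Items: []node{ready, ready}} // two stand-in nodes
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
	// Prints " True" once per node, matching the check above.
}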

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

TestMultiControlPlane/serial/StopCluster (251.87s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 stop --alsologtostderr -v 5
E1124 13:40:56.376992  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:42:03.771358  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 stop --alsologtostderr -v 5: (4m11.80005897s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5: exit status 7 (65.069945ms)
-- stdout --
	ha-613814
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613814-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-613814-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1124 13:44:02.612019  149885 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:44:02.612323  149885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:44:02.612334  149885 out.go:374] Setting ErrFile to fd 2...
	I1124 13:44:02.612340  149885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:44:02.612533  149885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:44:02.612734  149885 out.go:368] Setting JSON to false
	I1124 13:44:02.612773  149885 mustload.go:66] Loading cluster: ha-613814
	I1124 13:44:02.612903  149885 notify.go:221] Checking for updates...
	I1124 13:44:02.613215  149885 config.go:182] Loaded profile config "ha-613814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:44:02.613240  149885 status.go:174] checking status of ha-613814 ...
	I1124 13:44:02.615240  149885 status.go:371] ha-613814 host status = "Stopped" (err=<nil>)
	I1124 13:44:02.615254  149885 status.go:384] host is not running, skipping remaining checks
	I1124 13:44:02.615260  149885 status.go:176] ha-613814 status: &{Name:ha-613814 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:44:02.615279  149885 status.go:174] checking status of ha-613814-m02 ...
	I1124 13:44:02.616415  149885 status.go:371] ha-613814-m02 host status = "Stopped" (err=<nil>)
	I1124 13:44:02.616432  149885 status.go:384] host is not running, skipping remaining checks
	I1124 13:44:02.616438  149885 status.go:176] ha-613814-m02 status: &{Name:ha-613814-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:44:02.616463  149885 status.go:174] checking status of ha-613814-m04 ...
	I1124 13:44:02.617561  149885 status.go:371] ha-613814-m04 host status = "Stopped" (err=<nil>)
	I1124 13:44:02.617575  149885 status.go:384] host is not running, skipping remaining checks
	I1124 13:44:02.617580  149885 status.go:176] ha-613814-m04 status: &{Name:ha-613814-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (251.87s)

TestMultiControlPlane/serial/RestartCluster (77.75s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m17.082491257s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.75s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (81.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 node add --control-plane --alsologtostderr -v 5
E1124 13:45:56.377318  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-613814 node add --control-plane --alsologtostderr -v 5: (1m20.386107246s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-613814 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

TestJSONOutput/start/Command (74.84s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-049520 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1124 13:47:03.771427  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:47:19.441438  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-049520 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.843346707s)
--- PASS: TestJSONOutput/start/Command (74.84s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-049520 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-049520 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-049520 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-049520 --output=json --user=testUser: (6.835356334s)
--- PASS: TestJSONOutput/stop/Command (6.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-100762 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-100762 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.735702ms)
-- stdout --
	{"specversion":"1.0","id":"44b35e4c-af70-47e4-8edc-ac0c24485e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-100762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"15285dd6-81e7-481a-831c-e82933a080fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"e1e41cf9-711f-459c-acb1-c4518e82166c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"845e5019-44ce-4995-8c21-1b631a275c51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig"}}
	{"specversion":"1.0","id":"c9146c2c-83b3-4a3e-9f7c-929a31c51f0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube"}}
	{"specversion":"1.0","id":"49905cbe-f345-40d3-8f14-e759fcb86245","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cb57ebce-f3fa-4cdd-a4da-9f40c66b52dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e670d28c-71ab-4d13-8425-3c4a9f2ead59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-100762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-100762
--- PASS: TestErrorJSONOutput (0.25s)
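
Each line that --output=json emits above is a CloudEvents-style JSON object (specversion 1.0) whose type field distinguishes steps, info messages, and errors. A decoder sketch follows; the field names come straight from the output above, and the sample line is abbreviated from the final error event.

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion     string          `json:"specversion"`
	ID              string          `json:"id"`
	Source          string          `json:"source"`
	Type            string          `json:"type"`
	DataContentType string          `json:"datacontenttype"`
	Data            json.RawMessage `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"e670d28c-71ab-4d13-8425-3c4a9f2ead59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type) // io.k8s.sigs.minikube.error

	// Error events carry the exit code as a string inside data.
	var data struct {
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
		Message  string `json:"message"`
	}
	if err := json.Unmarshal(ev.Data, &data); err == nil {
		fmt.Println(data.Name, data.ExitCode) // DRV_UNSUPPORTED_OS 56
	}
}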

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (75.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-799335 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-799335 --driver=kvm2  --container-runtime=crio: (36.165164908s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-802902 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-802902 --driver=kvm2  --container-runtime=crio: (36.255358725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-799335
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-802902
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-802902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-802902
helpers_test.go:175: Cleaning up "first-799335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-799335
--- PASS: TestMinikubeProfile (75.16s)

TestMountStart/serial/StartWithMountFirst (20.2s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-494079 --memory=3072 --mount-string /tmp/TestMountStartserial96222823/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-494079 --memory=3072 --mount-string /tmp/TestMountStartserial96222823/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.204249869s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.20s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-494079 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-494079 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
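
The verification above pairs a plain ls with findmnt --json, whose output is a "filesystems" array of target/source/fstype/options entries (util-linux). Here is a parsing sketch over a hypothetical payload; the 9p filesystem type is an assumption inferred from the 9p-style mount flags (--mount-msize, --mount-port) used at start.

package main

import (
	"encoding/json"
	"fmt"
)

type findmntOutput struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// Hypothetical payload in the shape findmnt --json emits.
	sample := `{"filesystems":[{"target":"/minikube-host","source":"192.168.39.1","fstype":"9p","options":"rw,relatime"}]}`

	var out findmntOutput
	if err := json.Unmarshal([]byte(sample), &out); err != nil {
		panic(err)
	}
	for _, fs := range out.Filesystems {
		fmt.Printf("%s mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
	}
}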

TestMountStart/serial/StartWithMountSecond (20.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513747 --memory=3072 --mount-string /tmp/TestMountStartserial96222823/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513747 --memory=3072 --mount-string /tmp/TestMountStartserial96222823/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.155179086s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.16s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (0.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-494079 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.33s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.33s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-513747
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-513747: (1.265418602s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (18.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-513747
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-513747: (17.542200695s)
--- PASS: TestMountStart/serial/RestartStopped (18.54s)

TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-513747 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

TestMultiNode/serial/FreshStart2Nodes (96.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-037620 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1124 13:50:56.378416  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:52:03.770894  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-037620 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.233121822s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.56s)

TestMultiNode/serial/DeployApp2Nodes (6.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-037620 -- rollout status deployment/busybox: (4.560870706s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-hc6bl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-p57vm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-hc6bl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-p57vm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-hc6bl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-p57vm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-hc6bl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-hc6bl -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-p57vm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-037620 -- exec busybox-7b57f96db7-p57vm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
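
The DNS check above recovers the host IP by slicing busybox nslookup output: awk 'NR==5' keeps the fifth line, cut -d' ' -f3 keeps its third space-separated field, and that address is then pinged. A standalone sketch of the extraction follows; the sample output is a hypothetical busybox-style nslookup response, not captured from this run.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal"

	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5', then cut -d' ' -f3
	fmt.Println(fields[2])                 // 192.168.39.1
}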

TestMultiNode/serial/AddNode (41.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-037620 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-037620 -v=5 --alsologtostderr: (40.756707716s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.17s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-037620 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.43s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

TestMultiNode/serial/CopyFile (5.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp testdata/cp-test.txt multinode-037620:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3701185589/001/cp-test_multinode-037620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620:/home/docker/cp-test.txt multinode-037620-m02:/home/docker/cp-test_multinode-037620_multinode-037620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test_multinode-037620_multinode-037620-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620:/home/docker/cp-test.txt multinode-037620-m03:/home/docker/cp-test_multinode-037620_multinode-037620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test_multinode-037620_multinode-037620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp testdata/cp-test.txt multinode-037620-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3701185589/001/cp-test_multinode-037620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m02:/home/docker/cp-test.txt multinode-037620:/home/docker/cp-test_multinode-037620-m02_multinode-037620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test_multinode-037620-m02_multinode-037620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m02:/home/docker/cp-test.txt multinode-037620-m03:/home/docker/cp-test_multinode-037620-m02_multinode-037620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test_multinode-037620-m02_multinode-037620-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp testdata/cp-test.txt multinode-037620-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3701185589/001/cp-test_multinode-037620-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m03:/home/docker/cp-test.txt multinode-037620:/home/docker/cp-test_multinode-037620-m03_multinode-037620.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620 "sudo cat /home/docker/cp-test_multinode-037620-m03_multinode-037620.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 cp multinode-037620-m03:/home/docker/cp-test.txt multinode-037620-m02:/home/docker/cp-test_multinode-037620-m03_multinode-037620-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test_multinode-037620-m03_multinode-037620-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.91s)
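
The copy/verify pattern exercised above can be replayed by hand. A minimal sketch in shell, reusing the profile and node names from this run (cp-test.txt is just a placeholder payload):

    # push a local file onto the control-plane node
    minikube -p multinode-037620 cp testdata/cp-test.txt multinode-037620:/home/docker/cp-test.txt
    # fan it out from one node to another
    minikube -p multinode-037620 cp multinode-037620:/home/docker/cp-test.txt multinode-037620-m02:/home/docker/cp-test.txt
    # read it back over ssh on the target node to confirm the transfer
    minikube -p multinode-037620 ssh -n multinode-037620-m02 "sudo cat /home/docker/cp-test.txt"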

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-037620 node stop m03: (1.715058698s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-037620 status: exit status 7 (311.085116ms)

                                                
                                                
-- stdout --
	multinode-037620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-037620-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-037620-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr: exit status 7 (308.931361ms)

                                                
                                                
-- stdout --
	multinode-037620
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-037620-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-037620-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:53:01.216250  155275 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:53:01.216355  155275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:53:01.216363  155275 out.go:374] Setting ErrFile to fd 2...
	I1124 13:53:01.216367  155275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:53:01.216687  155275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 13:53:01.216915  155275 out.go:368] Setting JSON to false
	I1124 13:53:01.216961  155275 mustload.go:66] Loading cluster: multinode-037620
	I1124 13:53:01.217081  155275 notify.go:221] Checking for updates...
	I1124 13:53:01.217454  155275 config.go:182] Loaded profile config "multinode-037620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:53:01.217484  155275 status.go:174] checking status of multinode-037620 ...
	I1124 13:53:01.219643  155275 status.go:371] multinode-037620 host status = "Running" (err=<nil>)
	I1124 13:53:01.219658  155275 host.go:66] Checking if "multinode-037620" exists ...
	I1124 13:53:01.222415  155275 main.go:143] libmachine: domain multinode-037620 has defined MAC address 52:54:00:1a:6d:d3 in network mk-multinode-037620
	I1124 13:53:01.222862  155275 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1a:6d:d3", ip: ""} in network mk-multinode-037620: {Iface:virbr1 ExpiryTime:2025-11-24 14:50:43 +0000 UTC Type:0 Mac:52:54:00:1a:6d:d3 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-037620 Clientid:01:52:54:00:1a:6d:d3}
	I1124 13:53:01.222897  155275 main.go:143] libmachine: domain multinode-037620 has defined IP address 192.168.39.12 and MAC address 52:54:00:1a:6d:d3 in network mk-multinode-037620
	I1124 13:53:01.223044  155275 host.go:66] Checking if "multinode-037620" exists ...
	I1124 13:53:01.223328  155275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:53:01.225678  155275 main.go:143] libmachine: domain multinode-037620 has defined MAC address 52:54:00:1a:6d:d3 in network mk-multinode-037620
	I1124 13:53:01.226184  155275 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1a:6d:d3", ip: ""} in network mk-multinode-037620: {Iface:virbr1 ExpiryTime:2025-11-24 14:50:43 +0000 UTC Type:0 Mac:52:54:00:1a:6d:d3 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:multinode-037620 Clientid:01:52:54:00:1a:6d:d3}
	I1124 13:53:01.226216  155275 main.go:143] libmachine: domain multinode-037620 has defined IP address 192.168.39.12 and MAC address 52:54:00:1a:6d:d3 in network mk-multinode-037620
	I1124 13:53:01.226388  155275 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/multinode-037620/id_rsa Username:docker}
	I1124 13:53:01.306378  155275 ssh_runner.go:195] Run: systemctl --version
	I1124 13:53:01.312001  155275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:53:01.327084  155275 kubeconfig.go:125] found "multinode-037620" server: "https://192.168.39.12:8443"
	I1124 13:53:01.327135  155275 api_server.go:166] Checking apiserver status ...
	I1124 13:53:01.327175  155275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:53:01.344561  155275 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	W1124 13:53:01.355002  155275 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:53:01.355052  155275 ssh_runner.go:195] Run: ls
	I1124 13:53:01.359485  155275 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I1124 13:53:01.364248  155275 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I1124 13:53:01.364270  155275 status.go:463] multinode-037620 apiserver status = Running (err=<nil>)
	I1124 13:53:01.364283  155275 status.go:176] multinode-037620 status: &{Name:multinode-037620 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:53:01.364312  155275 status.go:174] checking status of multinode-037620-m02 ...
	I1124 13:53:01.365825  155275 status.go:371] multinode-037620-m02 host status = "Running" (err=<nil>)
	I1124 13:53:01.365842  155275 host.go:66] Checking if "multinode-037620-m02" exists ...
	I1124 13:53:01.367902  155275 main.go:143] libmachine: domain multinode-037620-m02 has defined MAC address 52:54:00:2a:04:dc in network mk-multinode-037620
	I1124 13:53:01.368282  155275 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:04:dc", ip: ""} in network mk-multinode-037620: {Iface:virbr1 ExpiryTime:2025-11-24 14:51:36 +0000 UTC Type:0 Mac:52:54:00:2a:04:dc Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:multinode-037620-m02 Clientid:01:52:54:00:2a:04:dc}
	I1124 13:53:01.368305  155275 main.go:143] libmachine: domain multinode-037620-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:2a:04:dc in network mk-multinode-037620
	I1124 13:53:01.368404  155275 host.go:66] Checking if "multinode-037620-m02" exists ...
	I1124 13:53:01.368622  155275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:53:01.370418  155275 main.go:143] libmachine: domain multinode-037620-m02 has defined MAC address 52:54:00:2a:04:dc in network mk-multinode-037620
	I1124 13:53:01.370760  155275 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2a:04:dc", ip: ""} in network mk-multinode-037620: {Iface:virbr1 ExpiryTime:2025-11-24 14:51:36 +0000 UTC Type:0 Mac:52:54:00:2a:04:dc Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:multinode-037620-m02 Clientid:01:52:54:00:2a:04:dc}
	I1124 13:53:01.370782  155275 main.go:143] libmachine: domain multinode-037620-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:2a:04:dc in network mk-multinode-037620
	I1124 13:53:01.370890  155275 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21932-132228/.minikube/machines/multinode-037620-m02/id_rsa Username:docker}
	I1124 13:53:01.447859  155275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:53:01.463024  155275 status.go:176] multinode-037620-m02 status: &{Name:multinode-037620-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:53:01.463065  155275 status.go:174] checking status of multinode-037620-m03 ...
	I1124 13:53:01.464779  155275 status.go:371] multinode-037620-m03 host status = "Stopped" (err=<nil>)
	I1124 13:53:01.464793  155275 status.go:384] host is not running, skipping remaining checks
	I1124 13:53:01.464798  155275 status.go:176] multinode-037620-m03 status: &{Name:multinode-037620-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
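
The stop/status sequence reduces to two commands; a minimal sketch, assuming the same profile (the non-zero exit status 7 from status is expected while any node is down, as the run above shows):

    minikube -p multinode-037620 node stop m03
    # status reports per-node state and exits 7 while m03 is stopped
    minikube -p multinode-037620 status || echo "status exited with $?"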

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-037620 node start m03 -v=5 --alsologtostderr: (36.210784335s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.71s)
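
Bringing the stopped worker back is symmetric; a minimal sketch with the same names:

    minikube -p multinode-037620 node start m03 -v=5 --alsologtostderr
    # confirm all nodes rejoin from the Kubernetes side
    kubectl get nodes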

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (283.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-037620
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-037620
E1124 13:55:06.847919  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:55:56.378080  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-037620: (2m43.474597688s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-037620 --wait=true -v=5 --alsologtostderr
E1124 13:57:03.771152  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-037620 --wait=true -v=5 --alsologtostderr: (2m0.201433575s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-037620
--- PASS: TestMultiNode/serial/RestartKeepsNodes (283.81s)
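
The restart scenario boils down to a stop/start cycle bracketed by node list; a minimal sketch (--wait=true asks start to block until the default set of cluster components reports healthy):

    minikube node list -p multinode-037620     # record the node set
    minikube stop -p multinode-037620          # stop every node in the profile
    minikube start -p multinode-037620 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-037620     # the same node set should come back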

                                                
                                    
TestMultiNode/serial/DeleteNode (2.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-037620 node delete m03: (2.164789386s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.61s)
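
Node removal can be verified the same way the test does, including the Ready-condition template from multinode_test.go:444; a minimal sketch:

    minikube -p multinode-037620 node delete m03
    kubectl get nodes
    # one True/False line per remaining node, taken from the Ready condition
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'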

                                                
                                    
TestMultiNode/serial/StopMultiNode (166.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 stop
E1124 14:00:56.377430  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-037620 stop: (2m46.127123875s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-037620 status: exit status 7 (68.484185ms)

                                                
                                                
-- stdout --
	multinode-037620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-037620-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr: exit status 7 (65.991557ms)

                                                
                                                
-- stdout --
	multinode-037620
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-037620-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:01:10.853596  157623 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:10.853705  157623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:10.853714  157623 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:10.853718  157623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:10.853922  157623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:01:10.854173  157623 out.go:368] Setting JSON to false
	I1124 14:01:10.854202  157623 mustload.go:66] Loading cluster: multinode-037620
	I1124 14:01:10.854365  157623 notify.go:221] Checking for updates...
	I1124 14:01:10.854608  157623 config.go:182] Loaded profile config "multinode-037620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:10.854622  157623 status.go:174] checking status of multinode-037620 ...
	I1124 14:01:10.856739  157623 status.go:371] multinode-037620 host status = "Stopped" (err=<nil>)
	I1124 14:01:10.856757  157623 status.go:384] host is not running, skipping remaining checks
	I1124 14:01:10.856762  157623 status.go:176] multinode-037620 status: &{Name:multinode-037620 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 14:01:10.856779  157623 status.go:174] checking status of multinode-037620-m02 ...
	I1124 14:01:10.857906  157623 status.go:371] multinode-037620-m02 host status = "Stopped" (err=<nil>)
	I1124 14:01:10.857919  157623 status.go:384] host is not running, skipping remaining checks
	I1124 14:01:10.857924  157623 status.go:176] multinode-037620-m02 status: &{Name:multinode-037620-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (166.26s)
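
For scripting, the host state is easier to read through a status template than by parsing the table above; a minimal sketch using the --format flag seen elsewhere in this run:

    minikube -p multinode-037620 stop
    # prints just the host state for each node (here "Stopped"); exit status 7 is expected for a stopped cluster
    minikube -p multinode-037620 status --format='{{.Host}}' || true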

                                                
                                    
TestMultiNode/serial/RestartMultiNode (113.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-037620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1124 14:02:03.771804  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-037620 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.430157551s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-037620 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.89s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (41.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-037620
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-037620-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-037620-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (76.886024ms)

                                                
                                                
-- stdout --
	* [multinode-037620-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-037620-m02' is duplicated with machine name 'multinode-037620-m02' in profile 'multinode-037620'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-037620-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-037620-m03 --driver=kvm2  --container-runtime=crio: (39.830836509s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-037620
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-037620: exit status 80 (202.058886ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-037620 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-037620-m03 already exists in multinode-037620-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-037620-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.02s)
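
The conflict exists because a multi-node profile named NAME owns machines NAME, NAME-m02, NAME-m03, and so on, so a new profile that resolves to one of those machine names is refused. A minimal sketch of the collision and how to spot it up front:

    # shows existing profiles (and, implicitly, the machine names they own)
    minikube profile list
    # fails with MK_USAGE: the machine name is already taken by profile multinode-037620
    minikube start -p multinode-037620-m02 --driver=kvm2 --container-runtime=crio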

                                                
                                    
TestScheduledStopUnix (107.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-400078 --memory=3072 --driver=kvm2  --container-runtime=crio
E1124 14:07:03.776555  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-400078 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.305670399s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-400078 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 14:07:06.246719  160083 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:07:06.247052  160083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:06.247067  160083 out.go:374] Setting ErrFile to fd 2...
	I1124 14:07:06.247073  160083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:06.247459  160083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:07:06.247843  160083 out.go:368] Setting JSON to false
	I1124 14:07:06.247982  160083 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:06.248492  160083 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:07:06.248621  160083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/config.json ...
	I1124 14:07:06.248932  160083 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:06.249089  160083 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-400078 -n scheduled-stop-400078
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-400078 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 14:07:06.537669  160144 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:07:06.537964  160144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:06.537974  160144 out.go:374] Setting ErrFile to fd 2...
	I1124 14:07:06.537978  160144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:06.538598  160144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:07:06.539014  160144 out.go:368] Setting JSON to false
	I1124 14:07:06.539349  160144 daemonize_unix.go:73] killing process 160133 as it is an old scheduled stop
	I1124 14:07:06.539621  160144 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:06.540057  160144 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:07:06.540148  160144 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/config.json ...
	I1124 14:07:06.540337  160144 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:06.540437  160144 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 14:07:06.545877  136268 retry.go:31] will retry after 99.707µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.547047  136268 retry.go:31] will retry after 92.437µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.548178  136268 retry.go:31] will retry after 305.388µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.549312  136268 retry.go:31] will retry after 429.914µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.550443  136268 retry.go:31] will retry after 747.442µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.551588  136268 retry.go:31] will retry after 641.546µs: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.552710  136268 retry.go:31] will retry after 1.574925ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.554918  136268 retry.go:31] will retry after 2.146037ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.558122  136268 retry.go:31] will retry after 3.395154ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.562315  136268 retry.go:31] will retry after 4.189545ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.567540  136268 retry.go:31] will retry after 4.769533ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.572788  136268 retry.go:31] will retry after 10.317465ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.584104  136268 retry.go:31] will retry after 8.011852ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.592398  136268 retry.go:31] will retry after 28.547008ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.621665  136268 retry.go:31] will retry after 16.322057ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
I1124 14:07:06.638976  136268 retry.go:31] will retry after 32.421578ms: open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-400078 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-400078 -n scheduled-stop-400078
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-400078
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-400078 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 14:07:32.245589  160293 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:07:32.245683  160293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:32.245691  160293 out.go:374] Setting ErrFile to fd 2...
	I1124 14:07:32.245695  160293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:07:32.245879  160293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:07:32.246103  160293 out.go:368] Setting JSON to false
	I1124 14:07:32.246197  160293 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:32.246498  160293 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:07:32.246560  160293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/scheduled-stop-400078/config.json ...
	I1124 14:07:32.246760  160293 mustload.go:66] Loading cluster: scheduled-stop-400078
	I1124 14:07:32.246854  160293 config.go:182] Loaded profile config "scheduled-stop-400078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-400078
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-400078: exit status 7 (65.583778ms)

                                                
                                                
-- stdout --
	scheduled-stop-400078
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-400078 -n scheduled-stop-400078
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-400078 -n scheduled-stop-400078: exit status 7 (64.625652ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-400078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-400078
--- PASS: TestScheduledStopUnix (107.98s)
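
The scheduled-stop workflow above maps onto three CLI calls; a minimal sketch (the schedule is tracked through a pid file under the profile directory, which is what the retry lines above are polling):

    minikube stop -p scheduled-stop-400078 --schedule 5m       # arm a stop five minutes out
    minikube status -p scheduled-stop-400078 --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-400078 --cancel-scheduled  # disarm it again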

                                                
                                    
TestRunningBinaryUpgrade (157.92s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3883208951 start -p running-upgrade-754968 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3883208951 start -p running-upgrade-754968 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m39.203475524s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-754968 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-754968 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.729089872s)
helpers_test.go:175: Cleaning up "running-upgrade-754968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-754968
--- PASS: TestRunningBinaryUpgrade (157.92s)
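
The upgrade-in-place pattern is: start the cluster with an old release binary, then point the new binary at the same profile. A minimal sketch (the v1.32.0 path below is the temporary download used by this run; any older minikube binary plays the same role):

    /tmp/minikube-v1.32.0.3883208951 start -p running-upgrade-754968 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    # same profile, newer binary: the running cluster is upgraded in place
    out/minikube-linux-amd64 start -p running-upgrade-754968 --memory=3072 --driver=kvm2 --container-runtime=crio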

                                                
                                    
TestKubernetesUpgrade (177.77s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.028585796s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-928048
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-928048: (1.845515416s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-928048 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-928048 status --format={{.Host}}: exit status 7 (75.628858ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.433315627s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-928048 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.739856ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-928048] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-928048
	    minikube start -p kubernetes-upgrade-928048 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9280482 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-928048 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.224596356s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-928048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-928048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-928048: (1.995649635s)
--- PASS: TestKubernetesUpgrade (177.77s)
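
The version-upgrade path exercised above stops the cluster, restarts it at the newer Kubernetes version, and confirms that going back down is refused; a minimal sketch:

    minikube start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-928048
    minikube start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
    # refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), as shown above
    minikube start -p kubernetes-upgrade-928048 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio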

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.994543ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-674463] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
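
--no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same MK_USAGE check; a minimal sketch following the suggestion in the output above:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-674463 --no-kubernetes --driver=kvm2 --container-runtime=crio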

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (81.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674463 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674463 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m21.0033868s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674463 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.33s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (27.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (26.558438191s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-674463 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-674463 status -o json: exit status 2 (216.412982ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-674463","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-674463
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.75s)
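
With Kubernetes disabled the VM keeps running while the kubelet stays down, which is easiest to see in the JSON status above; a minimal sketch of pulling out the two relevant fields (jq is an assumption here, any JSON tool would do):

    minikube -p NoKubernetes-674463 status -o json
    # status exits 2 in this state, but the JSON still lands on stdout for the pipe
    minikube -p NoKubernetes-674463 status -o json | jq '{Host, Kubelet}'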

                                                
                                    
TestNetworkPlugins/group/false (5.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-512455 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-512455 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (1.382002044s)

                                                
                                                
-- stdout --
	* [false-512455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:09:43.448188  162343 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:09:43.448297  162343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:09:43.448302  162343 out.go:374] Setting ErrFile to fd 2...
	I1124 14:09:43.448306  162343 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:09:43.448541  162343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-132228/.minikube/bin
	I1124 14:09:43.449024  162343 out.go:368] Setting JSON to false
	I1124 14:09:43.449866  162343 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6706,"bootTime":1763986677,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:09:43.449982  162343 start.go:143] virtualization: kvm guest
	I1124 14:09:43.544898  162343 out.go:179] * [false-512455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:09:43.622496  162343 notify.go:221] Checking for updates...
	I1124 14:09:43.732190  162343 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:09:43.884846  162343 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:09:44.029779  162343 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-132228/kubeconfig
	I1124 14:09:44.040438  162343 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-132228/.minikube
	I1124 14:09:44.097077  162343 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:09:44.261012  162343 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:09:44.412001  162343 config.go:182] Loaded profile config "NoKubernetes-674463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 14:09:44.412188  162343 config.go:182] Loaded profile config "kubernetes-upgrade-928048": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:09:44.412308  162343 config.go:182] Loaded profile config "running-upgrade-754968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 14:09:44.412458  162343 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:09:44.492264  162343 out.go:179] * Using the kvm2 driver based on user configuration
	I1124 14:09:44.554942  162343 start.go:309] selected driver: kvm2
	I1124 14:09:44.554973  162343 start.go:927] validating driver "kvm2" against <nil>
	I1124 14:09:44.554992  162343 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:09:44.621364  162343 out.go:203] 
	W1124 14:09:44.678969  162343 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 14:09:44.700045  162343 out.go:203] 

                                                
                                                
** /stderr **
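
The failure above is the expected guard: crio ships no built-in pod networking, so minikube rejects --cni=false for that runtime with MK_USAGE instead of creating a cluster with no pod network. A minimal sketch of the rejected invocation next to a valid one:

    # rejected: the crio container runtime requires CNI
    minikube start -p false-512455 --cni=false --driver=kvm2 --container-runtime=crio
    # accepted: omit --cni and let minikube choose a CNI for crio
    minikube start -p false-512455 --driver=kvm2 --container-runtime=crio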
net_test.go:88: 
----------------------- debugLogs start: false-512455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-512455

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: crictl containers:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> k8s: describe netcat deployment:
error: context "false-512455" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-512455" does not exist

>>> k8s: netcat logs:
error: context "false-512455" does not exist

>>> k8s: describe coredns deployment:
error: context "false-512455" does not exist

>>> k8s: describe coredns pods:
error: context "false-512455" does not exist

>>> k8s: coredns logs:
error: context "false-512455" does not exist

>>> k8s: describe api server pod(s):
error: context "false-512455" does not exist

>>> k8s: api server logs:
error: context "false-512455" does not exist

>>> host: /etc/cni:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: ip a s:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: ip r s:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: iptables-save:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: iptables table nat:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> k8s: describe kube-proxy daemon set:
error: context "false-512455" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-512455" does not exist

>>> k8s: kube-proxy logs:
error: context "false-512455" does not exist

>>> host: kubelet daemon status:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: kubelet daemon config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> k8s: kubelet logs:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.60:8443
  name: NoKubernetes-674463
contexts:
- context:
    cluster: NoKubernetes-674463
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674463
  name: NoKubernetes-674463
current-context: NoKubernetes-674463
kind: Config
users:
- name: NoKubernetes-674463
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.crt
    client-key: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.key

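This kubeconfig explains every error in the sections above and below: the only cluster and context the debug collector can see is NoKubernetes-674463. The false-512455 profile was never actually brought up during this group (the whole test finished in 5.53s), so each profile- or context-scoped command falls through to the "not found" / "does not exist" messages. A quick manual confirmation, using plain kubectl rather than the test harness:

  kubectl config get-contexts                 # lists only NoKubernetes-674463
  kubectl --context false-512455 get pods     # reproduces: context "false-512455" does not exist
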
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-512455

>>> host: docker daemon status:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: docker daemon config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /etc/docker/daemon.json:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: docker system info:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: cri-docker daemon status:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: cri-docker daemon config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: cri-dockerd version:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: containerd daemon status:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: containerd daemon config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /etc/containerd/config.toml:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: containerd config dump:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: crio daemon status:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: crio daemon config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: /etc/crio:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

>>> host: crio config:
* Profile "false-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-512455"

----------------------- debugLogs end: false-512455 [took: 3.957050945s] --------------------------------
helpers_test.go:175: Cleaning up "false-512455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-512455
--- PASS: TestNetworkPlugins/group/false (5.53s)

TestISOImage/Setup (30.28s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-472373 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-472373 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.281346979s)
--- PASS: TestISOImage/Setup (30.28s)

TestNoKubernetes/serial/Start (44.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674463 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.615904127s)
--- PASS: TestNoKubernetes/serial/Start (44.62s)

TestISOImage/Binaries/crictl (0.18s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)
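The Binaries subtests that follow all reuse the same probe from iso_test.go:76: `minikube ssh "which <tool>"`, which exits non-zero when the binary is missing from the guest ISO. A minimal standalone sketch of the same check (the profile name guest-472373 is taken from this run; the loop itself is illustrative, not the test's code):

  # hypothetical recreation of the TestISOImage/Binaries probe
  for tool in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
    out/minikube-linux-amd64 -p guest-472373 ssh "which $tool" || echo "missing: $tool"
  done
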

TestISOImage/Binaries/curl (0.27s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.27s)

TestISOImage/Binaries/docker (0.32s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.32s)

TestISOImage/Binaries/git (0.4s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.40s)

TestISOImage/Binaries/iptables (0.26s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.26s)

TestISOImage/Binaries/podman (0.25s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.25s)

TestISOImage/Binaries/rsync (0.19s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

TestISOImage/Binaries/socat (0.23s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.23s)

TestISOImage/Binaries/wget (0.18s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

TestISOImage/Binaries/VBoxControl (0.17s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

TestISOImage/Binaries/VBoxService (0.17s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21932-132228/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674463 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674463 "sudo systemctl is-active --quiet service kubelet": exit status 1 (159.944447ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
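VerifyK8sNotRunning passes precisely because the command fails: `systemctl is-active` exits 0 only when the unit is active, so the non-zero exit (surfaced through ssh as status 4 here) is the expected signal that no kubelet runs in a --no-kubernetes profile. An equivalent manual check, as a sketch:

  # exits 0 only if kubelet is active; anything else means "not running"
  out/minikube-linux-amd64 ssh -p NoKubernetes-674463 "sudo systemctl is-active --quiet service kubelet" \
    && echo "kubelet running (unexpected)" || echo "kubelet not running (expected)"
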

TestNoKubernetes/serial/ProfileList (1.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.94s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-674463
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-674463: (1.2745611s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (55.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-674463 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-674463 --driver=kvm2  --container-runtime=crio: (55.919095003s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (55.92s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-674463 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-674463 "sudo systemctl is-active --quiet service kubelet": exit status 1 (182.779567ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

TestStoppedBinaryUpgrade/Setup (3.02s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.02s)

TestPause/serial/Start (118.49s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-496021 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-496021 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m58.492310795s)
--- PASS: TestPause/serial/Start (118.49s)

TestStoppedBinaryUpgrade/Upgrade (141.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3726368689 start -p stopped-upgrade-278951 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1124 14:12:03.770224  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3726368689 start -p stopped-upgrade-278951 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m36.403651997s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3726368689 -p stopped-upgrade-278951 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3726368689 -p stopped-upgrade-278951 stop: (1.472372455s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-278951 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-278951 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.68603849s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.56s)
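Stripped of the harness wrapping, the Upgrade subtest condenses to a three-step flow: provision with the old release binary, stop the cluster, then restart it with the binary under test and require a clean start. The same steps by hand (the /tmp path is the versioned binary this run downloaded):

  /tmp/minikube-v1.32.0.3726368689 start -p stopped-upgrade-278951 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.32.0.3726368689 -p stopped-upgrade-278951 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-278951 --memory=3072 --driver=kvm2 --container-runtime=crio
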

TestNetworkPlugins/group/auto/Start (106.27s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m46.270219518s)
--- PASS: TestNetworkPlugins/group/auto/Start (106.27s)

TestPause/serial/SecondStartNoReconfiguration (36.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-496021 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-496021 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.9255586s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-278951
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-278951: (1.193406085s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

TestNetworkPlugins/group/kindnet/Start (91.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m31.437919802s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.44s)

TestPause/serial/Pause (0.76s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-496021 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-496021 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-496021 --output=json --layout=cluster: exit status 2 (247.573847ms)

-- stdout --
	{"Name":"pause-496021","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-496021","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
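The cluster-layout JSON above is machine-readable, and its status codes are HTTP-flavored: 200 OK, 405 Stopped, 418 Paused, as seen in this output. One way to pull out just the per-component states from the same command (jq is assumed available on the host; it is not part of the test):

  out/minikube-linux-amd64 status -p pause-496021 --output=json --layout=cluster \
    | jq '.Nodes[].Components | map_values(.StatusName)'
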

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-496021 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.79s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-496021 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

TestPause/serial/DeletePaused (0.89s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-496021 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

TestPause/serial/VerifyDeletedResources (0.54s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

TestNetworkPlugins/group/calico/Start (78.67s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m18.665041598s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-512455 "pgrep -a kubelet"
I1124 14:14:38.406597  136268 config.go:182] Loaded profile config "auto-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

TestNetworkPlugins/group/auto/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rdpsj" [1c571df3-df66-4f58-bafb-38f8b4b18e26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rdpsj" [1c571df3-df66-4f58-bafb-38f8b4b18e26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.006697379s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
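Localhost and HairPin probe two different paths with the same netcat one-liner: `nc -z` (scan only, send no data) with `-w 5` (connect timeout) against localhost:8080 checks that the pod can reach itself directly, while dialing the service name `netcat` checks hairpin NAT, i.e. a pod reaching itself back through its own Service. Reproducible by hand against this profile:

  kubectl --context auto-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
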

TestNetworkPlugins/group/custom-flannel/Start (74.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.104293822s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h9dl5" [f19b0d98-9ae0-47fd-87e7-edef33dc4d16] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004028384s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5pc92" [74c077c9-26fa-42af-b586-80b022fd5e36] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5pc92" [74c077c9-26fa-42af-b586-80b022fd5e36] Running
I1124 14:15:56.519682  136268 config.go:182] Loaded profile config "kindnet-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004742713s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (83.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.823529175s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.82s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-512455 "pgrep -a kubelet"
E1124 14:15:56.376559  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x9gkq" [a97a546a-a56b-4b06-9c56-6744ab83e32d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x9gkq" [a97a546a-a56b-4b06-9c56-6744ab83e32d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008321393s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-512455 "pgrep -a kubelet"
I1124 14:15:58.609966  136268 config.go:182] Loaded profile config "calico-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (14.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bzm2h" [d1af678c-fafd-430f-aeac-50cd9af1ae0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bzm2h" [d1af678c-fafd-430f-aeac-50cd9af1ae0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004538658s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.28s)
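Every NetCatPod subtest in this matrix follows the same recipe: force-replace the netcat deployment from testdata, then poll for pods labelled app=netcat to become healthy. Outside the Go helpers, roughly the same wait can be expressed with kubectl alone (a sketch; the --timeout mirrors the test's 15m bound):

  kubectl --context calico-512455 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context calico-512455 wait --for=condition=ready pod -l app=netcat --timeout=15m
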

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-512455 "pgrep -a kubelet"
I1124 14:16:20.104372  136268 config.go:182] Loaded profile config "custom-flannel-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.77s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-512455 replace --force -f testdata/netcat-deployment.yaml: (1.322281763s)
I1124 14:16:21.479616  136268 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1124 14:16:21.849854  136268 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cgxtk" [9ffc947b-be94-4e95-9a01-883a704edfc4] Pending
helpers_test.go:352: "netcat-cd4db9dbf-cgxtk" [9ffc947b-be94-4e95-9a01-883a704edfc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cgxtk" [9ffc947b-be94-4e95-9a01-883a704edfc4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004440666s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.77s)

TestNetworkPlugins/group/flannel/Start (70.1s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m10.104352205s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.10s)

TestNetworkPlugins/group/bridge/Start (96.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-512455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.317308527s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
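The plugin matrix above varies only the CNI selection flag (--cni=... or --enable-default-cni=true) passed to minikube start; memory, driver, runtime, and the DNS/Localhost/HairPin checks are held constant. A condensed sketch of the matrix (the real run uses one profile per plugin; "cni-demo" here is a made-up name reused purely for illustration):

  for cni in kindnet calico flannel bridge testdata/kube-flannel.yaml; do
    out/minikube-linux-amd64 start -p cni-demo --memory=3072 --wait=true --cni=$cni --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cni-demo
  done
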

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (106.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-544265 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1124 14:17:03.770719  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-544265 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m46.542953018s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (106.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-512455 "pgrep -a kubelet"
I1124 14:17:16.985666  136268 config.go:182] Loaded profile config "enable-default-cni-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z5gmr" [821b3674-60a3-4ae6-a164-16c5862fe136] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z5gmr" [821b3674-60a3-4ae6-a164-16c5862fe136] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.004240808s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
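
The Localhost and HairPin probes both run netcat inside the deployment: the first dials localhost:8080, the second dials the pod's own service name (netcat:8080), which only works when hairpin traffic, a pod reaching itself through its own service, is routed correctly. A minimal sketch of the hairpin probe (helper name illustrative; `-w 5` is the netcat connect timeout and `-z` scans without sending data):

package main

import (
	"fmt"
	"os/exec"
)

// hairpinOK reports whether a pod in the netcat deployment can reach
// its own service ("netcat") on port 8080, i.e. hairpin NAT works.
func hairpinOK(kubeCtx string) bool {
	cmd := exec.Command("kubectl", "--context", kubeCtx,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -z netcat 8080")
	return cmd.Run() == nil // nc exits 0 when the connection succeeds
}

func main() {
	fmt.Println("hairpin ok:", hairpinOK("enable-default-cni-512455"))
}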

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4z559" [5461e9d3-a1c8-42d4-a657-80785ae57ea0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004829137s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-512455 "pgrep -a kubelet"
I1124 14:17:40.554955  136268 config.go:182] Loaded profile config "flannel-512455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xgz8z" [c3884d87-eabb-4893-91e7-d337431e989f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xgz8z" [c3884d87-eabb-4893-91e7-d337431e989f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.0055916s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestStartStop/group/no-preload/serial/FirstStart (98.95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-909025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-909025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m38.951837709s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.95s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-512455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-512455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bmzx5" [18ae6b3f-75de-4e89-9c19-8c9595ccda25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bmzx5" [18ae6b3f-75de-4e89-9c19-8c9595ccda25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004130401s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

TestStartStop/group/embed-certs/serial/FirstStart (81.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-932986 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-932986 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m21.984851288s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.98s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-512455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-512455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.562549592s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-544265 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5fcd84cb-6d72-49ad-a193-e1c89fad71b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5fcd84cb-6d72-49ad-a193-e1c89fad71b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004572141s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-544265 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-544265 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-544265 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.058746004s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-544265 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/Stop (87.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-544265 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-544265 --alsologtostderr -v=3: (1m27.063312815s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.06s)

TestStartStop/group/no-preload/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-909025 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f2ea5661-12be-4dd5-b721-d0933954674b] Pending
helpers_test.go:352: "busybox" [f2ea5661-12be-4dd5-b721-d0933954674b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f2ea5661-12be-4dd5-b721-d0933954674b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00441439s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-909025 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-932986 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e72d46d-bd2f-43f2-8921-0612f2131873] Pending
helpers_test.go:352: "busybox" [8e72d46d-bd2f-43f2-8921-0612f2131873] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e72d46d-bd2f-43f2-8921-0612f2131873] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004202573s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-932986 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-909025 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-909025 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (71.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-909025 --alsologtostderr -v=3
E1124 14:19:38.652884  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.659285  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.670618  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.691951  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.733294  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.814700  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:38.976909  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:39.298883  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-909025 --alsologtostderr -v=3: (1m11.416133292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (71.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-932986 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1124 14:19:39.940305  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-932986 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/embed-certs/serial/Stop (88.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-932986 --alsologtostderr -v=3
E1124 14:19:41.221921  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:43.783864  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:19:48.905811  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-932986 --alsologtostderr -v=3: (1m28.237616456s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.24s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-956572 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bab1803a-67b6-4fc5-a243-2e2b5d047f2a] Pending
helpers_test.go:352: "busybox" [bab1803a-67b6-4fc5-a243-2e2b5d047f2a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1124 14:19:59.147546  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [bab1803a-67b6-4fc5-a243-2e2b5d047f2a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004287236s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-956572 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-956572 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-956572 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (90.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-956572 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-956572 --alsologtostderr -v=3: (1m30.620085083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (90.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-544265 -n old-k8s-version-544265
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-544265 -n old-k8s-version-544265: exit status 7 (63.4673ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-544265 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
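
After a stop, `minikube status` returns a non-zero exit code (7 here) while still printing the host state, which is why the test logs "status error: exit status 7 (may be ok)" and proceeds. A sketch of reading the status while tolerating that exit code via os/exec's ExitError (helper name illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs `minikube status --format={{.Host}}` and returns the
// printed state plus the exit code; non-zero is expected for a stopped
// cluster, so it is surfaced rather than treated as fatal.
func hostStatus(profile string) (state string, exitCode int, err error) {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return strings.TrimSpace(string(out)), ee.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	state, code, err := hostStatus("old-k8s-version-544265")
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%s exit=%d (non-zero may be ok)\n", state, code)
}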

TestStartStop/group/old-k8s-version/serial/SecondStart (43.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-544265 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1124 14:20:19.629564  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:39.446413  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-544265 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (42.873919997s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-544265 -n old-k8s-version-544265
E1124 14:21:00.583399  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:00.590857  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/auto-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-909025 -n no-preload-909025
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-909025 -n no-preload-909025: exit status 7 (65.302544ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-909025 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/no-preload/serial/SecondStart (56.76s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-909025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:20:50.329201  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.335595  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.346944  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.368298  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.409699  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.491217  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.653328  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:50.975022  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:51.617232  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.400470  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.406831  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.418176  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.439573  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.481035  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.562595  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.724193  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:52.899280  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:53.045953  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:53.687766  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:54.969294  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:55.461235  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:56.376403  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/functional-419891/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:20:57.531462  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-909025 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.506494666s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-909025 -n no-preload-909025
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ls5sw" [5c84e5ee-522a-41b0-bf39-16b1a16e4fd1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1124 14:21:02.653096  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ls5sw" [5c84e5ee-522a-41b0-bf39-16b1a16e4fd1] Running
E1124 14:21:10.825084  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:12.895074  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.004520102s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-932986 -n embed-certs-932986
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-932986 -n embed-certs-932986: exit status 7 (71.69ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-932986 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (45.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-932986 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-932986 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (45.007480431s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-932986 -n embed-certs-932986
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.29s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-ls5sw" [5c84e5ee-522a-41b0-bf39-16b1a16e4fd1] Running
E1124 14:21:21.429798  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:21.436267  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:21.447797  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:21.469219  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:21.510608  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:21.592056  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004658165s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-544265 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-544265 image list --format=json
E1124 14:21:21.754469  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
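
VerifyKubernetesImages lists the images loaded in the cluster with `image list --format=json` and reports anything outside the expected Kubernetes set, which is how kindest/kindnetd and the busybox test image surface above. A sketch of that filtering; the JSON field name (repoTags) and the allowlist rule are assumptions for illustration, not the test's exact schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image mirrors one entry of `minikube image list --format=json`.
// The repoTags field name is assumed here for illustration.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-544265",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			// Report anything outside the expected registry as a
			// non-minikube image, as the log lines above do.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}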

TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-544265 --alsologtostderr -v=1
E1124 14:21:22.076150  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:22.718412  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-544265 -n old-k8s-version-544265
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-544265 -n old-k8s-version-544265: exit status 2 (244.976572ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-544265 -n old-k8s-version-544265
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-544265 -n old-k8s-version-544265: exit status 2 (242.537791ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-544265 --alsologtostderr -v=1
E1124 14:21:24.000417  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-544265 -n old-k8s-version-544265
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-544265 -n old-k8s-version-544265
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)
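
The Pause step drives a full cycle: pause the cluster, confirm the apiserver reports Paused and the kubelet Stopped (status exits 2 for paused components, logged as "may be ok"), then unpause and re-check. A condensed sketch of that cycle (helper name illustrative; errors from `status` are deliberately ignored):

package main

import (
	"fmt"
	"os/exec"
)

// run executes minikube with the given args and returns combined output.
// Non-zero exits from `status` are expected for paused/stopped components,
// so the error is intentionally dropped in this sketch.
func run(args ...string) string {
	out, _ := exec.Command("minikube", args...).CombinedOutput()
	return string(out)
}

func main() {
	const p = "old-k8s-version-544265"
	run("pause", "-p", p)
	fmt.Print("apiserver: ", run("status", "--format={{.APIServer}}", "-p", p)) // expect Paused
	fmt.Print("kubelet:   ", run("status", "--format={{.Kubelet}}", "-p", p))   // expect Stopped
	run("unpause", "-p", p)
	fmt.Print("apiserver: ", run("status", "--format={{.APIServer}}", "-p", p)) // expect Running
}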

TestStartStop/group/newest-cni/serial/FirstStart (47.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-809759 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:21:26.562096  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:31.306470  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:31.684280  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:21:33.377292  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-809759 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (47.600959257s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572: exit status 7 (90.475808ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-956572 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:21:41.925874  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956572 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (55.936623738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
E1124 14:22:35.002899  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nw5hl" [2c83a18d-a5d3-497e-9596-d97973555332] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nw5hl" [2c83a18d-a5d3-497e-9596-d97973555332] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.006151767s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m4jpp" [8995120f-a39b-454d-a4c6-ceff6db6fffc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m4jpp" [8995120f-a39b-454d-a4c6-ceff6db6fffc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.003824365s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nw5hl" [2c83a18d-a5d3-497e-9596-d97973555332] Running
E1124 14:22:02.408000  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:03.771034  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/addons-377447/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003738901s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-909025 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-909025 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-909025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-909025 --alsologtostderr -v=1: (1.029545268s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-909025 -n no-preload-909025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-909025 -n no-preload-909025: exit status 2 (237.03846ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-909025 -n no-preload-909025
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-909025 -n no-preload-909025: exit status 2 (240.376034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-909025 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-909025 -n no-preload-909025
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-909025 -n no-preload-909025
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
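Each PersistentMounts subtest above asserts that a guest path is served from the persistent ext4 disk rather than tmpfs, by grepping `df -t ext4` output over SSH. A minimal sketch of the same probe, assuming the guest-472373 profile; it mirrors the shell one-liner at iso_test.go:97 rather than reproducing the test's own helper:

package main

import (
	"fmt"
	"os/exec"
)

// persistentMountOK reports whether path appears in `df -t ext4` inside the
// guest, i.e. whether it is backed by a persistent ext4 mount.
func persistentMountOK(profile, path string) bool {
	script := fmt.Sprintf("df -t ext4 %s | grep %s", path, path)
	// grep exits non-zero when the mount is missing, failing the command.
	return exec.Command("minikube", "-p", profile, "ssh", script).Run() == nil
}

func main() {
	for _, p := range []string{"/data", "/var/lib/docker", "/var/lib/cni",
		"/var/lib/kubelet", "/var/lib/minikube", "/var/lib/toolbox",
		"/var/lib/boot2docker"} {
		fmt.Printf("%-22s persistent: %v\n", p, persistentMountOK("guest-472373", p))
	}
}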

                                                
                                    
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.19s)
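VersionJSON cats /version.json out of the guest and checks the four fields echoed above. A sketch of that parse, assuming the field names shown in the log; the struct here is illustrative, not minikube's own type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionInfo mirrors the fields the test prints; json tags are taken
// verbatim from the log output above.
type versionInfo struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "guest-472373",
		"ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
		v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}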

                                                
                                    
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-472373 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
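The eBPFSupport check is a bare file-existence probe: a kernel built with BTF debug info exposes its type information at /sys/kernel/btf/vmlinux, which modern eBPF tooling relies on. A one-probe sketch against the same guest profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `test -f` inside the guest is the whole check; exit status 0 means
	// the kernel ships BTF and eBPF tools can introspect its types.
	err := exec.Command("minikube", "-p", "guest-472373", "ssh",
		"test -f /sys/kernel/btf/vmlinux").Run()
	if err != nil {
		fmt.Println("NOT FOUND: kernel lacks /sys/kernel/btf/vmlinux")
		return
	}
	fmt.Println("OK: BTF available")
}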
E1124 14:22:12.268221  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/kindnet-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m4jpp" [8995120f-a39b-454d-a4c6-ceff6db6fffc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004288496s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-932986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1124 14:22:17.287711  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-809759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1124 14:22:14.338893  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/calico-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-809759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004851417s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-809759 --alsologtostderr -v=3
E1124 14:22:17.206496  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:17.212913  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:17.224291  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:17.245742  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-809759 --alsologtostderr -v=3: (11.630223093s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-932986 image list --format=json
E1124 14:22:17.369813  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:17.531641  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)
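VerifyKubernetesImages dumps the runtime's image list as JSON and logs anything outside the expected Kubernetes image set, which is where the busybox and kindnetd lines above come from. A sketch of that filter; note the JSON shape (an array of objects with a repoTags field) is an assumption here, not something the log confirms:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// listedImage is a guess at the output shape of `image list --format=json`;
// only repoTags is used below.
type listedImage struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-932986",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []listedImage
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			// Illustrative allow-list only: the real test compares against
			// the exact image set expected for the Kubernetes version.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}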

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-932986 --alsologtostderr -v=1
E1124 14:22:17.853256  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:18.495384  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-932986 --alsologtostderr -v=1: (1.057262559s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-932986 -n embed-certs-932986
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-932986 -n embed-certs-932986: exit status 2 (239.378762ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-932986 -n embed-certs-932986
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-932986 -n embed-certs-932986: exit status 2 (229.152515ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-932986 --alsologtostderr -v=1
E1124 14:22:19.777564  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-932986 -n embed-certs-932986
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-932986 -n embed-certs-932986
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-809759 -n newest-cni-809759
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-809759 -n newest-cni-809759: exit status 7 (63.412659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-809759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-809759 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 14:22:27.462051  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.355975  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.362996  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.374383  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.395935  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.437495  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.519050  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:34.680653  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-809759 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.247314459s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-809759 -n newest-cni-809759
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pd6xv" [42ea883f-e9db-4a8f-9a3f-271438161f53] Running
E1124 14:22:35.644804  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:36.926839  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:37.703874  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:39.488729  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004115438s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pd6xv" [42ea883f-e9db-4a8f-9a3f-271438161f53] Running
E1124 14:22:43.370411  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/custom-flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:22:44.610552  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/flannel-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003690689s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-956572 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-956572 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-956572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572: exit status 2 (217.109606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572: exit status 2 (202.203621ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-956572 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-956572 --alsologtostderr -v=1: (1.129994965s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956572 -n default-k8s-diff-port-956572
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-809759 image list --format=json
E1124 14:22:58.185348  136268 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/enable-default-cni-512455/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-809759 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-809759 -n newest-cni-809759
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-809759 -n newest-cni-809759: exit status 2 (203.54538ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-809759 -n newest-cni-809759
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-809759 -n newest-cni-809759: exit status 2 (206.827155ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-809759 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-809759 -n newest-cni-809759
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-809759 -n newest-cni-809759
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.13s)
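All four Pause subtests follow the same script visible in their logs: pause the profile, expect `status` to exit 2 while reporting APIServer=Paused and Kubelet=Stopped, then unpause and expect clean statuses. A condensed sketch of that sequence, assuming the newest-cni-809759 profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus runs `minikube status --format={{.<field>}}`. While the
// cluster is paused the command exits with status 2, which the test treats
// as acceptable, so the error is reported but not fatal.
func componentStatus(profile, field string) string {
	out, err := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		fmt.Printf("status %s: %v (may be ok while paused)\n", field, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-809759"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("paused:", componentStatus(profile, "APIServer"), "/",
		componentStatus(profile, "Kubelet"))
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("unpaused:", componentStatus(profile, "APIServer"), "/",
		componentStatus(profile, "Kubelet"))
}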

                                                
                                    

Test skip (40/351)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
147 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
259 TestNetworkPlugins/group/kubenet 5.1
268 TestNetworkPlugins/group/cilium 4.44
295 TestStartStop/group/disable-driver-mounts 0.17
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-377447 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-512455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-512455

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-512455

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/hosts:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/resolv.conf:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-512455

>>> host: crictl pods:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: crictl containers:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> k8s: describe netcat deployment:
error: context "kubenet-512455" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-512455" does not exist

>>> k8s: netcat logs:
error: context "kubenet-512455" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-512455" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-512455" does not exist

>>> k8s: coredns logs:
error: context "kubenet-512455" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-512455" does not exist

>>> k8s: api server logs:
error: context "kubenet-512455" does not exist

>>> host: /etc/cni:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: ip a s:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: ip r s:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: iptables-save:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: iptables table nat:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-512455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-512455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-512455" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.60:8443
  name: NoKubernetes-674463
contexts:
- context:
    cluster: NoKubernetes-674463
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674463
  name: NoKubernetes-674463
current-context: NoKubernetes-674463
kind: Config
users:
- name: NoKubernetes-674463
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.crt
    client-key: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-512455

>>> host: docker daemon status:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: docker daemon config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: docker system info:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: cri-docker daemon status:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: cri-docker daemon config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: cri-dockerd version:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: containerd daemon status:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: containerd daemon config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: containerd config dump:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: crio daemon status:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: crio daemon config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: /etc/crio:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

>>> host: crio config:
* Profile "kubenet-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-512455"

----------------------- debugLogs end: kubenet-512455 [took: 4.512962084s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-512455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-512455
--- SKIP: TestNetworkPlugins/group/kubenet (5.10s)
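
Every probe in the debugLogs dump above failed the same way: the test was skipped before "minikube start" ever ran, so there is no kubenet-512455 profile and no matching kubeconfig context; the only context present (see the kubectl config above) is NoKubernetes-674463, left over from another test. A minimal sketch for confirming that state on the Jenkins host, assuming the same kubeconfig (all commands are stock kubectl/minikube):

    # list the contexts kubectl knows about; only NoKubernetes-674463 should appear
    kubectl config get-contexts -o name
    # list live minikube profiles; kubenet-512455 should be absent, matching the errors above
    minikube profile list
    # pinning a command to the missing context reproduces the failure mode seen above
    kubectl --context kubenet-512455 get pods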

x
+
TestNetworkPlugins/group/cilium (4.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-512455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-512455
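
The seven probes above are the harness's in-pod DNS and connectivity checks against the cluster DNS service at 10.96.0.10; here each one dies at the kubectl layer because the cilium-512455 context was never created. For reference, a sketch of the equivalent manual checks in a healthy cluster (the pod name "netcat" is inferred from the probe headers above; the nc flags assume a BSD-style netcat in the image):

    # resolve the API service through cluster DNS, then force plain and TCP queries on port 53
    kubectl --context cilium-512455 exec netcat -- nslookup kubernetes.default
    kubectl --context cilium-512455 exec netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl --context cilium-512455 exec netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local
    # raw reachability of the DNS service on udp/53 and tcp/53
    kubectl --context cilium-512455 exec netcat -- nc -u -z -w 2 10.96.0.10 53
    kubectl --context cilium-512455 exec netcat -- nc -z -w 2 10.96.0.10 53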

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-512455

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-512455

>>> host: /etc/nsswitch.conf:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/hosts:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/resolv.conf:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-512455

>>> host: crictl pods:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: crictl containers:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> k8s: describe netcat deployment:
error: context "cilium-512455" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-512455" does not exist

>>> k8s: netcat logs:
error: context "cilium-512455" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-512455" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-512455" does not exist

>>> k8s: coredns logs:
error: context "cilium-512455" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-512455" does not exist

>>> k8s: api server logs:
error: context "cilium-512455" does not exist

>>> host: /etc/cni:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: ip a s:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: ip r s:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: iptables-save:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: iptables table nat:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-512455

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-512455

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-512455" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-512455" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-512455

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-512455

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-512455" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-512455" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-512455" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-512455" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-512455" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: kubelet daemon config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> k8s: kubelet logs:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-132228/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.60:8443
  name: NoKubernetes-674463
contexts:
- context:
    cluster: NoKubernetes-674463
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 14:09:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-674463
  name: NoKubernetes-674463
current-context: NoKubernetes-674463
kind: Config
users:
- name: NoKubernetes-674463
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.crt
    client-key: /home/jenkins/minikube-integration/21932-132228/.minikube/profiles/NoKubernetes-674463/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-512455

>>> host: docker daemon status:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: docker daemon config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: docker system info:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: cri-docker daemon status:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: cri-docker daemon config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: cri-dockerd version:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: containerd daemon status:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: containerd daemon config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: containerd config dump:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: crio daemon status:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: crio daemon config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: /etc/crio:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

>>> host: crio config:
* Profile "cilium-512455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-512455"

----------------------- debugLogs end: cilium-512455 [took: 4.265361758s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-512455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-512455
--- SKIP: TestNetworkPlugins/group/cilium (4.44s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-597120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-597120
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
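
This group is gated on the virtualbox driver, so on this KVM job it only creates and then deletes a placeholder profile. A hedged sketch of rerunning just this group against the driver it requires (run from a minikube checkout; the -minikube-start-args flag follows minikube's integration-test convention and its exact spelling should be treated as an assumption):

    go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' \
      -timeout 30m -args --minikube-start-args="--driver=virtualbox"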