Test Report: KVM_Linux_crio 21997

f52e7af1cf54d5c1b3af81f5f4f56bb8b0b6d6f9:2025-12-01:42595

Failed tests (3/431)

Order  Failed test                                      Duration (s)
46     TestAddons/parallel/Ingress                      160.29
345    TestPreload                                      145.22
417    TestPause/serial/SecondStartNoReconfiguration    66.5
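To triage a failure locally, a single case can usually be rerun in isolation with Go's test filter. A minimal sketch, assuming the integration tests live under ./test/integration (as in the minikube repo) and that out/minikube-linux-amd64 has already been built; the -minikube-start-args flag mirrors this job's driver and runtime, but its exact name and accepted values should be checked against the test harness:

	go test ./test/integration -v -timeout 60m \
	  -run "TestAddons/parallel/Ingress" \
	  -minikube-start-args="--driver=kvm2 --container-runtime=crio"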
TestAddons/parallel/Ingress (160.29s)
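The failing step is the in-VM curl: the ingress-nginx controller and the nginx test pod become Ready, but the request to http://127.0.0.1/ with the nginx.example.com Host header never completes (curl's exit code 28 is its operation-timeout error, surfaced below as "ssh: Process exited with status 28"). A hedged manual re-check of the same path, reusing the profile name and Host header from the log; the --max-time and -v flags are debugging additions, not part of the original test:

	out/minikube-linux-amd64 -p addons-153147 ssh "curl -sv --max-time 60 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-153147 -n ingress-nginx get pods,svc -o wide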

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-153147 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-153147 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-153147 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4486d923-4013-47f9-8cd9-a81f1ddebd66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4486d923-4013-47f9-8cd9-a81f1ddebd66] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004990295s
I1201 19:08:33.763372   16868 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-153147 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.498830247s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-153147 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.9
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-153147 -n addons-153147
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 logs -n 25: (1.177767158s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-433667                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-433667 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ --download-only -p binary-mirror-004263 --alsologtostderr --binary-mirror http://127.0.0.1:36255 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-004263 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ -p binary-mirror-004263                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-004263 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ addons  │ enable dashboard -p addons-153147                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-153147                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ start   │ -p addons-153147 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:07 UTC │
	│ addons  │ addons-153147 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:07 UTC │ 01 Dec 25 19:07 UTC │
	│ addons  │ addons-153147 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:07 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ enable headlamp -p addons-153147 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ ip      │ addons-153147 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ ssh     │ addons-153147 ssh cat /opt/local-path-provisioner/pvc-4148b11a-9b36-46c4-a96c-f1c2e80569aa_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:09 UTC │
	│ addons  │ addons-153147 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ ssh     │ addons-153147 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │                     │
	│ addons  │ addons-153147 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153147                                                                                                                                                                                                                                                                                                                                                                                         │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:08 UTC │ 01 Dec 25 19:08 UTC │
	│ addons  │ addons-153147 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │ 01 Dec 25 19:09 UTC │
	│ addons  │ addons-153147 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:09 UTC │ 01 Dec 25 19:09 UTC │
	│ ip      │ addons-153147 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153147        │ jenkins │ v1.37.0 │ 01 Dec 25 19:10 UTC │ 01 Dec 25 19:10 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:31.586200   17783 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:31.586307   17783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:31.586316   17783 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:31.586320   17783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:31.586507   17783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:05:31.587012   17783 out.go:368] Setting JSON to false
	I1201 19:05:31.587800   17783 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2875,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:31.587869   17783 start.go:143] virtualization: kvm guest
	I1201 19:05:31.589644   17783 out.go:179] * [addons-153147] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:31.590948   17783 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:05:31.590966   17783 notify.go:221] Checking for updates...
	I1201 19:05:31.593340   17783 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:31.594408   17783 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:05:31.595550   17783 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:05:31.596925   17783 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:05:31.598064   17783 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:05:31.599325   17783 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:31.628951   17783 out.go:179] * Using the kvm2 driver based on user configuration
	I1201 19:05:31.630049   17783 start.go:309] selected driver: kvm2
	I1201 19:05:31.630068   17783 start.go:927] validating driver "kvm2" against <nil>
	I1201 19:05:31.630078   17783 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:05:31.630735   17783 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:31.631008   17783 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:05:31.631034   17783 cni.go:84] Creating CNI manager for ""
	I1201 19:05:31.631070   17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 19:05:31.631078   17783 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 19:05:31.631116   17783 start.go:353] cluster config:
	{Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1201 19:05:31.631220   17783 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 19:05:31.633418   17783 out.go:179] * Starting "addons-153147" primary control-plane node in "addons-153147" cluster
	I1201 19:05:31.634405   17783 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:31.634430   17783 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 19:05:31.634437   17783 cache.go:65] Caching tarball of preloaded images
	I1201 19:05:31.634515   17783 preload.go:238] Found /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 19:05:31.634525   17783 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 19:05:31.634822   17783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json ...
	I1201 19:05:31.634855   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json: {Name:mk849ecfa6433efccbb5c4bb5f92de012794f1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:31.634982   17783 start.go:360] acquireMachinesLock for addons-153147: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 19:05:31.635032   17783 start.go:364] duration metric: took 37.672µs to acquireMachinesLock for "addons-153147"
	I1201 19:05:31.635051   17783 start.go:93] Provisioning new machine with config: &{Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:05:31.635092   17783 start.go:125] createHost starting for "" (driver="kvm2")
	I1201 19:05:31.636968   17783 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1201 19:05:31.637118   17783 start.go:159] libmachine.API.Create for "addons-153147" (driver="kvm2")
	I1201 19:05:31.637145   17783 client.go:173] LocalClient.Create starting
	I1201 19:05:31.637232   17783 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem
	I1201 19:05:31.750756   17783 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem
	I1201 19:05:31.908343   17783 main.go:143] libmachine: creating domain...
	I1201 19:05:31.908364   17783 main.go:143] libmachine: creating network...
	I1201 19:05:31.909718   17783 main.go:143] libmachine: found existing default network
	I1201 19:05:31.909949   17783 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 19:05:31.910425   17783 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e362d0}
	I1201 19:05:31.910510   17783 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-153147</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 19:05:31.916164   17783 main.go:143] libmachine: creating private network mk-addons-153147 192.168.39.0/24...
	I1201 19:05:31.981515   17783 main.go:143] libmachine: private network mk-addons-153147 192.168.39.0/24 created
	I1201 19:05:31.981860   17783 main.go:143] libmachine: <network>
	  <name>mk-addons-153147</name>
	  <uuid>b23d370b-3063-4245-a4b3-cd356384ef08</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:5a:39:d5'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 19:05:31.981889   17783 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 ...
	I1201 19:05:31.981910   17783 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso
	I1201 19:05:31.981920   17783 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:05:31.981974   17783 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso...
	I1201 19:05:32.250954   17783 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa...
	I1201 19:05:32.362602   17783 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk...
	I1201 19:05:32.362642   17783 main.go:143] libmachine: Writing magic tar header
	I1201 19:05:32.362667   17783 main.go:143] libmachine: Writing SSH key tar header
	I1201 19:05:32.362741   17783 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 ...
	I1201 19:05:32.362805   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147
	I1201 19:05:32.362840   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147 (perms=drwx------)
	I1201 19:05:32.362850   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines
	I1201 19:05:32.362858   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines (perms=drwxr-xr-x)
	I1201 19:05:32.362868   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:05:32.362876   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube (perms=drwxr-xr-x)
	I1201 19:05:32.362885   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903
	I1201 19:05:32.362893   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903 (perms=drwxrwxr-x)
	I1201 19:05:32.362903   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1201 19:05:32.362910   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1201 19:05:32.362922   17783 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1201 19:05:32.362930   17783 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1201 19:05:32.362941   17783 main.go:143] libmachine: checking permissions on dir: /home
	I1201 19:05:32.362954   17783 main.go:143] libmachine: skipping /home - not owner
	I1201 19:05:32.362960   17783 main.go:143] libmachine: defining domain...
	I1201 19:05:32.364342   17783 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-153147</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-153147'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1201 19:05:32.371809   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:74:84:3b in network default
	I1201 19:05:32.372397   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:32.372414   17783 main.go:143] libmachine: starting domain...
	I1201 19:05:32.372418   17783 main.go:143] libmachine: ensuring networks are active...
	I1201 19:05:32.373133   17783 main.go:143] libmachine: Ensuring network default is active
	I1201 19:05:32.373457   17783 main.go:143] libmachine: Ensuring network mk-addons-153147 is active
	I1201 19:05:32.374013   17783 main.go:143] libmachine: getting domain XML...
	I1201 19:05:32.375027   17783 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-153147</name>
	  <uuid>b210d02d-07be-4131-97b5-bb937549f8ab</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/addons-153147.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b9:bf:db'/>
	      <source network='mk-addons-153147'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:74:84:3b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1201 19:05:33.655296   17783 main.go:143] libmachine: waiting for domain to start...
	I1201 19:05:33.656813   17783 main.go:143] libmachine: domain is now running
	I1201 19:05:33.656851   17783 main.go:143] libmachine: waiting for IP...
	I1201 19:05:33.657848   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:33.658454   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:33.658512   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:33.658853   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:33.658905   17783 retry.go:31] will retry after 269.230888ms: waiting for domain to come up
	I1201 19:05:33.929366   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:33.929855   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:33.929870   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:33.930157   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:33.930197   17783 retry.go:31] will retry after 305.63835ms: waiting for domain to come up
	I1201 19:05:34.237864   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:34.238366   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:34.238380   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:34.238652   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:34.238690   17783 retry.go:31] will retry after 446.840166ms: waiting for domain to come up
	I1201 19:05:34.687368   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:34.687897   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:34.687911   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:34.688219   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:34.688251   17783 retry.go:31] will retry after 482.929364ms: waiting for domain to come up
	I1201 19:05:35.172982   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:35.173477   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:35.173492   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:35.173818   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:35.173861   17783 retry.go:31] will retry after 517.844571ms: waiting for domain to come up
	I1201 19:05:35.693488   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:35.694026   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:35.694043   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:35.694401   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:35.694434   17783 retry.go:31] will retry after 589.021743ms: waiting for domain to come up
	I1201 19:05:36.285251   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:36.285755   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:36.285770   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:36.286035   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:36.286065   17783 retry.go:31] will retry after 763.414346ms: waiting for domain to come up
	I1201 19:05:37.052005   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:37.052989   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:37.053007   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:37.053334   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:37.053364   17783 retry.go:31] will retry after 1.423779057s: waiting for domain to come up
	I1201 19:05:38.478416   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:38.478986   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:38.479012   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:38.479258   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:38.479294   17783 retry.go:31] will retry after 1.388017801s: waiting for domain to come up
	I1201 19:05:39.868704   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:39.869213   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:39.869226   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:39.869506   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:39.869536   17783 retry.go:31] will retry after 2.181859207s: waiting for domain to come up
	I1201 19:05:42.053090   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:42.053658   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:42.053672   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:42.053966   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:42.053994   17783 retry.go:31] will retry after 2.483985266s: waiting for domain to come up
	I1201 19:05:44.539387   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:44.539921   17783 main.go:143] libmachine: no network interface addresses found for domain addons-153147 (source=lease)
	I1201 19:05:44.539935   17783 main.go:143] libmachine: trying to list again with source=arp
	I1201 19:05:44.540186   17783 main.go:143] libmachine: unable to find current IP address of domain addons-153147 in network mk-addons-153147 (interfaces detected: [])
	I1201 19:05:44.540213   17783 retry.go:31] will retry after 3.116899486s: waiting for domain to come up
	I1201 19:05:47.658994   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.659571   17783 main.go:143] libmachine: domain addons-153147 has current primary IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.659586   17783 main.go:143] libmachine: found domain IP: 192.168.39.9
	I1201 19:05:47.659597   17783 main.go:143] libmachine: reserving static IP address...
	I1201 19:05:47.660075   17783 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-153147", mac: "52:54:00:b9:bf:db", ip: "192.168.39.9"} in network mk-addons-153147
	I1201 19:05:47.847637   17783 main.go:143] libmachine: reserved static IP address 192.168.39.9 for domain addons-153147
	I1201 19:05:47.847661   17783 main.go:143] libmachine: waiting for SSH...
	I1201 19:05:47.847669   17783 main.go:143] libmachine: Getting to WaitForSSH function...
	I1201 19:05:47.850124   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.850532   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:47.850560   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.850766   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:47.850969   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:47.850978   17783 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1201 19:05:47.961632   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 19:05:47.961994   17783 main.go:143] libmachine: domain creation complete
	I1201 19:05:47.963574   17783 machine.go:94] provisionDockerMachine start ...
	I1201 19:05:47.966152   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.966564   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:47.966591   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:47.966739   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:47.966945   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:47.966958   17783 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 19:05:48.078505   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1201 19:05:48.078531   17783 buildroot.go:166] provisioning hostname "addons-153147"
	I1201 19:05:48.081192   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.081561   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.081586   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.081766   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:48.081982   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:48.081998   17783 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-153147 && echo "addons-153147" | sudo tee /etc/hostname
	I1201 19:05:48.211218   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153147
	
	I1201 19:05:48.214103   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.214533   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.214563   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.214734   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:48.215049   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:48.215077   17783 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153147' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153147/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153147' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 19:05:48.337672   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 19:05:48.337698   17783 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 19:05:48.337744   17783 buildroot.go:174] setting up certificates
	I1201 19:05:48.337754   17783 provision.go:84] configureAuth start
	I1201 19:05:48.340636   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.341025   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.341059   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.343518   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.343934   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.343958   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.344096   17783 provision.go:143] copyHostCerts
	I1201 19:05:48.344164   17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 19:05:48.344288   17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 19:05:48.344348   17783 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 19:05:48.344420   17783 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.addons-153147 san=[127.0.0.1 192.168.39.9 addons-153147 localhost minikube]
	I1201 19:05:48.586584   17783 provision.go:177] copyRemoteCerts
	I1201 19:05:48.586636   17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 19:05:48.589191   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.589562   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.589582   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.589732   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:05:48.673984   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 19:05:48.703274   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 19:05:48.732950   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 19:05:48.761737   17783 provision.go:87] duration metric: took 423.969079ms to configureAuth
	I1201 19:05:48.761772   17783 buildroot.go:189] setting minikube options for container-runtime
	I1201 19:05:48.761985   17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:05:48.764885   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.765301   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.765331   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.765543   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:48.765754   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:48.765775   17783 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 19:05:48.995919   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 19:05:48.995945   17783 machine.go:97] duration metric: took 1.032352169s to provisionDockerMachine
	I1201 19:05:48.995957   17783 client.go:176] duration metric: took 17.358803255s to LocalClient.Create
	I1201 19:05:48.995975   17783 start.go:167] duration metric: took 17.358856135s to libmachine.API.Create "addons-153147"
	I1201 19:05:48.995984   17783 start.go:293] postStartSetup for "addons-153147" (driver="kvm2")
	I1201 19:05:48.995998   17783 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 19:05:48.996063   17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 19:05:48.999169   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.999571   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:48.999598   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:48.999755   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:05:49.085082   17783 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 19:05:49.090179   17783 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 19:05:49.090210   17783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 19:05:49.090285   17783 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 19:05:49.090311   17783 start.go:296] duration metric: took 94.320335ms for postStartSetup
	I1201 19:05:49.093335   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.093679   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:49.093703   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.093923   17783 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/config.json ...
	I1201 19:05:49.094095   17783 start.go:128] duration metric: took 17.458994341s to createHost
	I1201 19:05:49.096259   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.096666   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:49.096692   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.096862   17783 main.go:143] libmachine: Using SSH client type: native
	I1201 19:05:49.097036   17783 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.9 22 <nil> <nil>}
	I1201 19:05:49.097052   17783 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 19:05:49.203990   17783 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764615949.160336512
	
	I1201 19:05:49.204012   17783 fix.go:216] guest clock: 1764615949.160336512
	I1201 19:05:49.204021   17783 fix.go:229] Guest: 2025-12-01 19:05:49.160336512 +0000 UTC Remote: 2025-12-01 19:05:49.094105721 +0000 UTC m=+17.552949213 (delta=66.230791ms)
	I1201 19:05:49.204041   17783 fix.go:200] guest clock delta is within tolerance: 66.230791ms
	I1201 19:05:49.204047   17783 start.go:83] releasing machines lock for "addons-153147", held for 17.569006332s
	I1201 19:05:49.206481   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.206971   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:49.206998   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.207555   17783 ssh_runner.go:195] Run: cat /version.json
	I1201 19:05:49.207633   17783 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 19:05:49.210535   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.210807   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.211006   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:49.211041   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.211219   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:49.211223   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:05:49.211248   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:49.211402   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:05:49.290335   17783 ssh_runner.go:195] Run: systemctl --version
	I1201 19:05:49.327928   17783 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 19:05:49.488057   17783 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 19:05:49.494308   17783 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 19:05:49.494404   17783 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 19:05:49.514619   17783 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 19:05:49.514647   17783 start.go:496] detecting cgroup driver to use...
	I1201 19:05:49.514715   17783 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 19:05:49.532179   17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 19:05:49.549290   17783 docker.go:218] disabling cri-docker service (if available) ...
	I1201 19:05:49.549358   17783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 19:05:49.565766   17783 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 19:05:49.581413   17783 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 19:05:49.727729   17783 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 19:05:49.928358   17783 docker.go:234] disabling docker service ...
	I1201 19:05:49.928420   17783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 19:05:49.944791   17783 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 19:05:49.959738   17783 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 19:05:50.104541   17783 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 19:05:50.241878   17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 19:05:50.257183   17783 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 19:05:50.279415   17783 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 19:05:50.279498   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.293978   17783 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 19:05:50.294041   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.306162   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.319407   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.332285   17783 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 19:05:50.345725   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.359119   17783 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 19:05:50.381056   17783 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
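Pieced together from the sed/grep commands above, the /etc/crio/crio.conf.d/02-crio.conf drop-in should end up looking roughly like this (a reconstruction that assumes cri-o's stock [crio.runtime]/[crio.image] section layout; it was not dumped from the VM):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
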
	I1201 19:05:50.393414   17783 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 19:05:50.405369   17783 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 19:05:50.405433   17783 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 19:05:50.428266   17783 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 19:05:50.443751   17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:05:50.588529   17783 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 19:05:50.693392   17783 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 19:05:50.693486   17783 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 19:05:50.698540   17783 start.go:564] Will wait 60s for crictl version
	I1201 19:05:50.698615   17783 ssh_runner.go:195] Run: which crictl
	I1201 19:05:50.702571   17783 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 19:05:50.737545   17783 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 19:05:50.737656   17783 ssh_runner.go:195] Run: crio --version
	I1201 19:05:50.767623   17783 ssh_runner.go:195] Run: crio --version
	I1201 19:05:50.798792   17783 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1201 19:05:50.802480   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:50.802809   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:05:50.802851   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:05:50.803071   17783 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1201 19:05:50.807383   17783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:05:50.821395   17783 kubeadm.go:884] updating cluster {Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 19:05:50.821481   17783 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:50.821521   17783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:05:50.851316   17783 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1201 19:05:50.851423   17783 ssh_runner.go:195] Run: which lz4
	I1201 19:05:50.855473   17783 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1201 19:05:50.859915   17783 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1201 19:05:50.859947   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1201 19:05:52.013121   17783 crio.go:462] duration metric: took 1.157673577s to copy over tarball
	I1201 19:05:52.013200   17783 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1201 19:05:53.447967   17783 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.434744249s)
	I1201 19:05:53.447990   17783 crio.go:469] duration metric: took 1.434841606s to extract the tarball
	I1201 19:05:53.447996   17783 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1201 19:05:53.484551   17783 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 19:05:53.522626   17783 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 19:05:53.522647   17783 cache_images.go:86] Images are preloaded, skipping loading
	I1201 19:05:53.522655   17783 kubeadm.go:935] updating node { 192.168.39.9 8443 v1.34.2 crio true true} ...
	I1201 19:05:53.522729   17783 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153147 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
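The unit fragment above corresponds to the 10-kubeadm.conf drop-in copied to the guest a few lines below. If the effective kubelet flags ever need checking by hand on the node, something like the following would show the merged unit and its drop-ins (illustrative commands, not part of the test run):

	sudo systemctl cat kubelet
	systemctl show kubelet --property=DropInPaths,ExecStart
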
	I1201 19:05:53.522789   17783 ssh_runner.go:195] Run: crio config
	I1201 19:05:53.567884   17783 cni.go:84] Creating CNI manager for ""
	I1201 19:05:53.567906   17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 19:05:53.567927   17783 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 19:05:53.567955   17783 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.9 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153147 NodeName:addons-153147 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 19:05:53.568074   17783 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153147"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 19:05:53.568131   17783 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 19:05:53.579573   17783 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 19:05:53.579629   17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 19:05:53.590726   17783 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1201 19:05:53.610871   17783 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 19:05:53.631337   17783 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
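At this point the rendered kubeadm config sits at /var/tmp/minikube/kubeadm.yaml.new on the guest. If it ever needs a manual sanity check before init, recent kubeadm releases can validate it in place, e.g. (illustrative, not executed in this run):

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
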
	I1201 19:05:53.651765   17783 ssh_runner.go:195] Run: grep 192.168.39.9	control-plane.minikube.internal$ /etc/hosts
	I1201 19:05:53.655721   17783 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.9	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 19:05:53.670504   17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:05:53.808706   17783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:05:53.837003   17783 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147 for IP: 192.168.39.9
	I1201 19:05:53.837025   17783 certs.go:195] generating shared ca certs ...
	I1201 19:05:53.837047   17783 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:53.837193   17783 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 19:05:53.894756   17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt ...
	I1201 19:05:53.894783   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt: {Name:mk9d92e4ed7e08dd0b90f17ae2238e4b3cab654f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:53.894965   17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key ...
	I1201 19:05:53.894977   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key: {Name:mkef5ca972f1c69a34c7abb8ad1cfe5908f2c969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:53.895051   17783 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 19:05:54.008875   17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt ...
	I1201 19:05:54.008899   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt: {Name:mk57e693a03a2819def8c3cf0c009113054618ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.009057   17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key ...
	I1201 19:05:54.009068   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key: {Name:mk1c9e3ef68f6fdd21e9d3833c157a47757f195c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.009133   17783 certs.go:257] generating profile certs ...
	I1201 19:05:54.009180   17783 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key
	I1201 19:05:54.009194   17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt with IP's: []
	I1201 19:05:54.209034   17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt ...
	I1201 19:05:54.209068   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: {Name:mk42d80b6d9c11d66552eaaf3a875bce22bfb0f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.209710   17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key ...
	I1201 19:05:54.209728   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.key: {Name:mkdf76aa61afbf60ff90312f9447b1ce21ead418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.209868   17783 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab
	I1201 19:05:54.209892   17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]
	I1201 19:05:54.290382   17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab ...
	I1201 19:05:54.290408   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab: {Name:mkbb09da0d23c7ccd21267c6f7310ddc23bc0f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.290563   17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab ...
	I1201 19:05:54.290576   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab: {Name:mk563d827d9c5afb8b9cf8238ec44bfa097e94c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.290647   17783 certs.go:382] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt.ea1010ab -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt
	I1201 19:05:54.291216   17783 certs.go:386] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key.ea1010ab -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key
	I1201 19:05:54.291290   17783 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key
	I1201 19:05:54.291310   17783 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt with IP's: []
	I1201 19:05:54.336866   17783 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt ...
	I1201 19:05:54.336896   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt: {Name:mkd0ff1eba9b217ab374efa12ac807423e770c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.337062   17783 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key ...
	I1201 19:05:54.337074   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key: {Name:mk0d95ada5a63120bc1d07e56cc5ac788f250ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:05:54.337250   17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 19:05:54.337287   17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 19:05:54.337312   17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 19:05:54.337334   17783 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 19:05:54.337822   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 19:05:54.368457   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 19:05:54.397140   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 19:05:54.426223   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 19:05:54.454176   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 19:05:54.482700   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 19:05:54.515950   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 19:05:54.554672   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 19:05:54.585442   17783 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 19:05:54.614608   17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 19:05:54.634421   17783 ssh_runner.go:195] Run: openssl version
	I1201 19:05:54.640647   17783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 19:05:54.653314   17783 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:05:54.658195   17783 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:05:54.658245   17783 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 19:05:54.665193   17783 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
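The apiserver serving certificate generated above was signed for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.9]; a quick way to confirm that from the host would be (illustrative):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
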
	I1201 19:05:54.677859   17783 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 19:05:54.682534   17783 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 19:05:54.682596   17783 kubeadm.go:401] StartCluster: {Name:addons-153147 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153147 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:05:54.682680   17783 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 19:05:54.682759   17783 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 19:05:54.718913   17783 cri.go:89] found id: ""
	I1201 19:05:54.718992   17783 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 19:05:54.731282   17783 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 19:05:54.742651   17783 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 19:05:54.754291   17783 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 19:05:54.754310   17783 kubeadm.go:158] found existing configuration files:
	
	I1201 19:05:54.754353   17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 19:05:54.764931   17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 19:05:54.765007   17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 19:05:54.776128   17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 19:05:54.786247   17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 19:05:54.786318   17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 19:05:54.797437   17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 19:05:54.807692   17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 19:05:54.807757   17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 19:05:54.818667   17783 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 19:05:54.829163   17783 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 19:05:54.829234   17783 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 19:05:54.840332   17783 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1201 19:05:54.980139   17783 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1201 19:06:07.007304   17783 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 19:06:07.007390   17783 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 19:06:07.007497   17783 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 19:06:07.007612   17783 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 19:06:07.007691   17783 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 19:06:07.007769   17783 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 19:06:07.009237   17783 out.go:252]   - Generating certificates and keys ...
	I1201 19:06:07.009314   17783 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 19:06:07.009382   17783 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 19:06:07.009461   17783 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 19:06:07.009511   17783 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 19:06:07.009566   17783 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 19:06:07.009613   17783 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 19:06:07.009660   17783 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 19:06:07.009782   17783 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153147 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1201 19:06:07.009869   17783 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 19:06:07.010017   17783 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153147 localhost] and IPs [192.168.39.9 127.0.0.1 ::1]
	I1201 19:06:07.010089   17783 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 19:06:07.010150   17783 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 19:06:07.010198   17783 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 19:06:07.010252   17783 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 19:06:07.010314   17783 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 19:06:07.010380   17783 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 19:06:07.010463   17783 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 19:06:07.010547   17783 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 19:06:07.010631   17783 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 19:06:07.010742   17783 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 19:06:07.010852   17783 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 19:06:07.012226   17783 out.go:252]   - Booting up control plane ...
	I1201 19:06:07.012352   17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 19:06:07.012475   17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 19:06:07.012579   17783 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 19:06:07.012738   17783 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 19:06:07.012887   17783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 19:06:07.013031   17783 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 19:06:07.013150   17783 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 19:06:07.013195   17783 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 19:06:07.013305   17783 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 19:06:07.013472   17783 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 19:06:07.013533   17783 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00161678s
	I1201 19:06:07.013651   17783 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 19:06:07.013753   17783 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.9:8443/livez
	I1201 19:06:07.013891   17783 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 19:06:07.013991   17783 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 19:06:07.014112   17783 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.618952021s
	I1201 19:06:07.014222   17783 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.636160671s
	I1201 19:06:07.014332   17783 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501932664s
	I1201 19:06:07.014462   17783 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1201 19:06:07.014639   17783 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1201 19:06:07.014719   17783 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1201 19:06:07.014971   17783 kubeadm.go:319] [mark-control-plane] Marking the node addons-153147 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1201 19:06:07.015039   17783 kubeadm.go:319] [bootstrap-token] Using token: 7vt6ii.w2s814lac513ec53
	I1201 19:06:07.016494   17783 out.go:252]   - Configuring RBAC rules ...
	I1201 19:06:07.016589   17783 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1201 19:06:07.016662   17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1201 19:06:07.016821   17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1201 19:06:07.017001   17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1201 19:06:07.017150   17783 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1201 19:06:07.017275   17783 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1201 19:06:07.017469   17783 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1201 19:06:07.017556   17783 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1201 19:06:07.017646   17783 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1201 19:06:07.017656   17783 kubeadm.go:319] 
	I1201 19:06:07.017758   17783 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1201 19:06:07.017775   17783 kubeadm.go:319] 
	I1201 19:06:07.017897   17783 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1201 19:06:07.017909   17783 kubeadm.go:319] 
	I1201 19:06:07.017949   17783 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1201 19:06:07.018037   17783 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1201 19:06:07.018107   17783 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1201 19:06:07.018114   17783 kubeadm.go:319] 
	I1201 19:06:07.018160   17783 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1201 19:06:07.018166   17783 kubeadm.go:319] 
	I1201 19:06:07.018220   17783 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1201 19:06:07.018229   17783 kubeadm.go:319] 
	I1201 19:06:07.018307   17783 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1201 19:06:07.018391   17783 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1201 19:06:07.018486   17783 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1201 19:06:07.018516   17783 kubeadm.go:319] 
	I1201 19:06:07.018622   17783 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1201 19:06:07.018727   17783 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1201 19:06:07.018743   17783 kubeadm.go:319] 
	I1201 19:06:07.018857   17783 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7vt6ii.w2s814lac513ec53 \
	I1201 19:06:07.018946   17783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f7850289782e26755534bbd10a21d664dd20b89a823d3fd24570eae03b241557 \
	I1201 19:06:07.018964   17783 kubeadm.go:319] 	--control-plane 
	I1201 19:06:07.018967   17783 kubeadm.go:319] 
	I1201 19:06:07.019038   17783 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1201 19:06:07.019043   17783 kubeadm.go:319] 
	I1201 19:06:07.019118   17783 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7vt6ii.w2s814lac513ec53 \
	I1201 19:06:07.019225   17783 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f7850289782e26755534bbd10a21d664dd20b89a823d3fd24570eae03b241557 
	I1201 19:06:07.019248   17783 cni.go:84] Creating CNI manager for ""
	I1201 19:06:07.019255   17783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 19:06:07.020633   17783 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1201 19:06:07.021733   17783 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1201 19:06:07.035749   17783 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
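The 496-byte conflist written here is minikube's default bridge CNI configuration; a generic bridge-plus-portmap conflist of that kind looks roughly like the following (field names and values are illustrative, not the exact file minikube generated):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
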
	I1201 19:06:07.063357   17783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 19:06:07.063458   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:07.063477   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153147 minikube.k8s.io/updated_at=2025_12_01T19_06_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9 minikube.k8s.io/name=addons-153147 minikube.k8s.io/primary=true
	I1201 19:06:07.187074   17783 ops.go:34] apiserver oom_adj: -16
	I1201 19:06:07.187138   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:07.687221   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:08.187444   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:08.687566   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:09.187348   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:09.687359   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:10.188197   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:10.687490   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:11.188027   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:11.687755   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:12.188018   17783 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1201 19:06:12.361106   17783 kubeadm.go:1114] duration metric: took 5.297716741s to wait for elevateKubeSystemPrivileges
	I1201 19:06:12.361145   17783 kubeadm.go:403] duration metric: took 17.678554909s to StartCluster
	I1201 19:06:12.361185   17783 settings.go:142] acquiring lock: {Name:mk63d3c798c3f817a653e3e39f757c57080fff76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:12.361318   17783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:06:12.361798   17783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/kubeconfig: {Name:mkf67691ba90fcc0b34f838eaae92a26f4e31096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 19:06:12.362047   17783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1201 19:06:12.362078   17783 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 19:06:12.362163   17783 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1201 19:06:12.362310   17783 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153147"
	I1201 19:06:12.362324   17783 addons.go:70] Setting yakd=true in profile "addons-153147"
	I1201 19:06:12.362624   17783 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153147"
	I1201 19:06:12.362673   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.362698   17783 addons.go:239] Setting addon yakd=true in "addons-153147"
	I1201 19:06:12.362874   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.362913   17783 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153147"
	I1201 19:06:12.362938   17783 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153147"
	I1201 19:06:12.362955   17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:12.362980   17783 addons.go:70] Setting cloud-spanner=true in profile "addons-153147"
	I1201 19:06:12.363991   17783 addons.go:239] Setting addon cloud-spanner=true in "addons-153147"
	I1201 19:06:12.364044   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.363000   17783 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153147"
	I1201 19:06:12.364497   17783 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153147"
	I1201 19:06:12.364523   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.363010   17783 addons.go:70] Setting default-storageclass=true in profile "addons-153147"
	I1201 19:06:12.364594   17783 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153147"
	I1201 19:06:12.363019   17783 addons.go:70] Setting inspektor-gadget=true in profile "addons-153147"
	I1201 19:06:12.364991   17783 addons.go:239] Setting addon inspektor-gadget=true in "addons-153147"
	I1201 19:06:12.363022   17783 addons.go:70] Setting ingress-dns=true in profile "addons-153147"
	I1201 19:06:12.363031   17783 addons.go:70] Setting gcp-auth=true in profile "addons-153147"
	I1201 19:06:12.363027   17783 addons.go:70] Setting metrics-server=true in profile "addons-153147"
	I1201 19:06:12.363040   17783 addons.go:70] Setting ingress=true in profile "addons-153147"
	I1201 19:06:12.363044   17783 addons.go:70] Setting storage-provisioner=true in profile "addons-153147"
	I1201 19:06:12.363069   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.363107   17783 addons.go:70] Setting registry-creds=true in profile "addons-153147"
	I1201 19:06:12.363107   17783 addons.go:70] Setting registry=true in profile "addons-153147"
	I1201 19:06:12.363163   17783 addons.go:70] Setting volumesnapshots=true in profile "addons-153147"
	I1201 19:06:12.363223   17783 addons.go:70] Setting volcano=true in profile "addons-153147"
	I1201 19:06:12.363335   17783 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153147"
	I1201 19:06:12.365083   17783 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153147"
	I1201 19:06:12.365140   17783 addons.go:239] Setting addon ingress=true in "addons-153147"
	I1201 19:06:12.365166   17783 addons.go:239] Setting addon ingress-dns=true in "addons-153147"
	I1201 19:06:12.365189   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.365200   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.365362   17783 addons.go:239] Setting addon registry=true in "addons-153147"
	I1201 19:06:12.365434   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.365465   17783 out.go:179] * Verifying Kubernetes components...
	I1201 19:06:12.365093   17783 addons.go:239] Setting addon storage-provisioner=true in "addons-153147"
	I1201 19:06:12.365921   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.366131   17783 addons.go:239] Setting addon registry-creds=true in "addons-153147"
	I1201 19:06:12.366165   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.365118   17783 mustload.go:66] Loading cluster: addons-153147
	I1201 19:06:12.365129   17783 addons.go:239] Setting addon metrics-server=true in "addons-153147"
	I1201 19:06:12.366182   17783 addons.go:239] Setting addon volumesnapshots=true in "addons-153147"
	I1201 19:06:12.366193   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.366205   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.365147   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.366690   17783 addons.go:239] Setting addon volcano=true in "addons-153147"
	I1201 19:06:12.366766   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.367386   17783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 19:06:12.367879   17783 config.go:182] Loaded profile config "addons-153147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:06:12.374765   17783 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153147"
	I1201 19:06:12.374768   17783 addons.go:239] Setting addon default-storageclass=true in "addons-153147"
	I1201 19:06:12.374804   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.374811   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.375506   17783 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1201 19:06:12.375516   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1201 19:06:12.375546   17783 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1201 19:06:12.375613   17783 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1201 19:06:12.375622   17783 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1201 19:06:12.376703   17783 out.go:179]   - Using image docker.io/registry:3.0.0
	I1201 19:06:12.376714   17783 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	W1201 19:06:12.377151   17783 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1201 19:06:12.377505   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1201 19:06:12.377526   17783 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 19:06:12.377560   17783 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1201 19:06:12.378876   17783 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1201 19:06:12.378903   17783 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:12.378922   17783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 19:06:12.377563   17783 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:12.378979   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1201 19:06:12.377587   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:12.378383   17783 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1201 19:06:12.378399   17783 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1201 19:06:12.378413   17783 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:12.379437   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1201 19:06:12.379634   17783 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1201 19:06:12.379670   17783 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:12.379676   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1201 19:06:12.379681   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1201 19:06:12.380578   17783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:12.380598   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1201 19:06:12.380611   17783 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1201 19:06:12.380577   17783 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:12.380601   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 19:06:12.381259   17783 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1201 19:06:12.381269   17783 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1201 19:06:12.381285   17783 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1201 19:06:12.381290   17783 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:12.381806   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1201 19:06:12.381336   17783 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:12.381953   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1201 19:06:12.381371   17783 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:12.381988   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1201 19:06:12.382656   17783 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1201 19:06:12.382674   17783 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1201 19:06:12.382741   17783 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1201 19:06:12.382809   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1201 19:06:12.383597   17783 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:12.383616   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1201 19:06:12.383631   17783 out.go:179]   - Using image docker.io/busybox:stable
	I1201 19:06:12.384787   17783 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:12.384798   17783 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:12.384803   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1201 19:06:12.384812   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1201 19:06:12.385008   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1201 19:06:12.386372   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1201 19:06:12.387616   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1201 19:06:12.388532   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.388994   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.389429   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.389769   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1201 19:06:12.390209   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.390462   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.390502   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.390823   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.390867   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.391045   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.391141   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.391182   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.391394   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.391795   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.391853   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.391897   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.391921   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.392205   17783 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1201 19:06:12.392375   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.392841   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.392854   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.392864   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.392876   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.393495   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.393505   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.393498   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1201 19:06:12.393535   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.393541   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1201 19:06:12.394018   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.394029   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.394083   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.394281   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.394404   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.395009   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.395268   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.395634   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.395664   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.395713   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.395748   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.395778   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.396038   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.396048   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.396353   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.396570   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397072   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397074   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.397181   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397245   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.397277   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397354   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.397387   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397393   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.397416   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.397426   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.397707   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.397714   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.397746   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.398155   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.398193   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.398555   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:12.399788   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.400194   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:12.400226   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:12.400367   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	W1201 19:06:12.769755   17783 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40750->192.168.39.9:22: read: connection reset by peer
	I1201 19:06:12.769786   17783 retry.go:31] will retry after 332.254015ms: ssh: handshake failed: read tcp 192.168.39.1:40750->192.168.39.9:22: read: connection reset by peer
	I1201 19:06:13.378707   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1201 19:06:13.379229   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1201 19:06:13.383792   17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1201 19:06:13.383816   17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1201 19:06:13.399655   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 19:06:13.450814   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1201 19:06:13.461244   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 19:06:13.501944   17783 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1201 19:06:13.501976   17783 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1201 19:06:13.506703   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1201 19:06:13.531015   17783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.16894159s)
	I1201 19:06:13.531119   17783 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.163707502s)
	I1201 19:06:13.531167   17783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1201 19:06:13.531190   17783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 19:06:13.578443   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1201 19:06:13.653139   17783 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1201 19:06:13.653166   17783 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1201 19:06:13.657848   17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1201 19:06:13.657872   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1201 19:06:13.657851   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1201 19:06:13.695553   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1201 19:06:13.695578   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1201 19:06:13.733632   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1201 19:06:13.736993   17783 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1201 19:06:13.737022   17783 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1201 19:06:13.739912   17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1201 19:06:13.739939   17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1201 19:06:13.926366   17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1201 19:06:13.926395   17783 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1201 19:06:13.932656   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1201 19:06:13.932684   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1201 19:06:13.944186   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1201 19:06:13.969787   17783 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:13.969823   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1201 19:06:14.017776   17783 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1201 19:06:14.017809   17783 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1201 19:06:14.055367   17783 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1201 19:06:14.055400   17783 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1201 19:06:14.177601   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1201 19:06:14.177630   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1201 19:06:14.196098   17783 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:14.196125   17783 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1201 19:06:14.273628   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1201 19:06:14.277136   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1201 19:06:14.277165   17783 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1201 19:06:14.356566   17783 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:14.356588   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1201 19:06:14.460481   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1201 19:06:14.460512   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1201 19:06:14.552963   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1201 19:06:14.675511   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1201 19:06:14.720360   17783 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:14.720381   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1201 19:06:14.901038   17783 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1201 19:06:14.901064   17783 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1201 19:06:15.110995   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:15.175784   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.797042055s)
	I1201 19:06:15.280604   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1201 19:06:15.280635   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1201 19:06:15.716850   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1201 19:06:15.716874   17783 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1201 19:06:16.101042   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1201 19:06:16.101066   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1201 19:06:16.751087   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1201 19:06:16.751112   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1201 19:06:17.098207   17783 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 19:06:17.098239   17783 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1201 19:06:17.378938   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1201 19:06:18.176787   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.797526868s)
	I1201 19:06:18.176856   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.777171594s)
	I1201 19:06:18.176895   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.726030797s)
	I1201 19:06:18.305157   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.798414642s)
	I1201 19:06:18.305202   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.843918943s)
	I1201 19:06:18.305235   17783 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.774031567s)
	I1201 19:06:18.305292   17783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.774103667s)
	I1201 19:06:18.305317   17783 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1201 19:06:18.306114   17783 node_ready.go:35] waiting up to 6m0s for node "addons-153147" to be "Ready" ...
	I1201 19:06:18.348113   17783 node_ready.go:49] node "addons-153147" is "Ready"
	I1201 19:06:18.348146   17783 node_ready.go:38] duration metric: took 42.004217ms for node "addons-153147" to be "Ready" ...
	I1201 19:06:18.348162   17783 api_server.go:52] waiting for apiserver process to appear ...
	I1201 19:06:18.348300   17783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:06:18.941724   17783 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153147" context rescaled to 1 replicas
	I1201 19:06:19.193426   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.614952842s)
	I1201 19:06:19.193517   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.535592045s)
	I1201 19:06:19.832025   17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1201 19:06:19.834709   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:19.835083   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:19.835106   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:19.835267   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:20.036395   17783 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1201 19:06:20.137394   17783 addons.go:239] Setting addon gcp-auth=true in "addons-153147"
	I1201 19:06:20.137446   17783 host.go:66] Checking if "addons-153147" exists ...
	I1201 19:06:20.139423   17783 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1201 19:06:20.141876   17783 main.go:143] libmachine: domain addons-153147 has defined MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:20.142366   17783 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:bf:db", ip: ""} in network mk-addons-153147: {Iface:virbr1 ExpiryTime:2025-12-01 20:05:47 +0000 UTC Type:0 Mac:52:54:00:b9:bf:db Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:addons-153147 Clientid:01:52:54:00:b9:bf:db}
	I1201 19:06:20.142406   17783 main.go:143] libmachine: domain addons-153147 has defined IP address 192.168.39.9 and MAC address 52:54:00:b9:bf:db in network mk-addons-153147
	I1201 19:06:20.142605   17783 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/addons-153147/id_rsa Username:docker}
	I1201 19:06:20.756451   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.022772928s)
	I1201 19:06:20.756496   17783 addons.go:495] Verifying addon ingress=true in "addons-153147"
	I1201 19:06:20.756548   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.812328084s)
	I1201 19:06:20.756613   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.482950749s)
	I1201 19:06:20.756641   17783 addons.go:495] Verifying addon registry=true in "addons-153147"
	I1201 19:06:20.756729   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.203734541s)
	I1201 19:06:20.756752   17783 addons.go:495] Verifying addon metrics-server=true in "addons-153147"
	I1201 19:06:20.756799   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.081249156s)
	I1201 19:06:20.758453   17783 out.go:179] * Verifying ingress addon...
	I1201 19:06:20.758458   17783 out.go:179] * Verifying registry addon...
	I1201 19:06:20.759294   17783 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153147 service yakd-dashboard -n yakd-dashboard
	
	I1201 19:06:20.760993   17783 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1201 19:06:20.761074   17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1201 19:06:20.812656   17783 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1201 19:06:20.812682   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:20.812698   17783 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1201 19:06:20.812709   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:20.945908   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.834868464s)
	W1201 19:06:20.945943   17783 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:20.945963   17783 retry.go:31] will retry after 317.591372ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1201 19:06:21.263726   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1201 19:06:21.280407   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:21.280730   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:21.770687   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:21.770690   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:22.114060   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.735074442s)
	I1201 19:06:22.114102   17783 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153147"
	I1201 19:06:22.114117   17783 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.765792041s)
	I1201 19:06:22.114147   17783 api_server.go:72] duration metric: took 9.752045154s to wait for apiserver process to appear ...
	I1201 19:06:22.114156   17783 api_server.go:88] waiting for apiserver healthz status ...
	I1201 19:06:22.114175   17783 api_server.go:253] Checking apiserver healthz at https://192.168.39.9:8443/healthz ...
	I1201 19:06:22.114185   17783 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.97473627s)
	I1201 19:06:22.117548   17783 out.go:179] * Verifying csi-hostpath-driver addon...
	I1201 19:06:22.117582   17783 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1201 19:06:22.119693   17783 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1201 19:06:22.120393   17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1201 19:06:22.121135   17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1201 19:06:22.121154   17783 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1201 19:06:22.169513   17783 api_server.go:279] https://192.168.39.9:8443/healthz returned 200:
	ok
	I1201 19:06:22.170856   17783 api_server.go:141] control plane version: v1.34.2
	I1201 19:06:22.170891   17783 api_server.go:131] duration metric: took 56.726559ms to wait for apiserver health ...
	I1201 19:06:22.170904   17783 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 19:06:22.196484   17783 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1201 19:06:22.196513   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:22.196788   17783 system_pods.go:59] 20 kube-system pods found
	I1201 19:06:22.196817   17783 system_pods.go:61] "amd-gpu-device-plugin-nh9fh" [19ed7c27-42bf-429e-a659-5cab61a37789] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:06:22.196839   17783 system_pods.go:61] "coredns-66bc5c9577-7bgbb" [d82083d8-b7a2-4608-8b02-e6bbf9976482] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:06:22.196846   17783 system_pods.go:61] "coredns-66bc5c9577-qthgq" [1c971b48-0414-4686-9897-a70b10f42b2f] Running
	I1201 19:06:22.196852   17783 system_pods.go:61] "csi-hostpath-attacher-0" [a0da7e77-faf6-4065-9d43-305953b2e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:06:22.196857   17783 system_pods.go:61] "csi-hostpath-resizer-0" [e7a300b2-e469-4b5d-9ebc-f37fda2db088] Pending
	I1201 19:06:22.196862   17783 system_pods.go:61] "csi-hostpathplugin-x97sg" [11625919-d915-4098-abc3-6638f492f692] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:06:22.196867   17783 system_pods.go:61] "etcd-addons-153147" [fe1b4837-c595-4349-a8ec-771f6514e48d] Running
	I1201 19:06:22.196872   17783 system_pods.go:61] "kube-apiserver-addons-153147" [13a5d41f-f476-4996-a51b-61e6297cd643] Running
	I1201 19:06:22.196875   17783 system_pods.go:61] "kube-controller-manager-addons-153147" [66972fb7-9f43-4a64-babd-2a9ead11665a] Running
	I1201 19:06:22.196880   17783 system_pods.go:61] "kube-ingress-dns-minikube" [ada2334d-7448-402c-ba30-9ea15e6fe684] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:06:22.196884   17783 system_pods.go:61] "kube-proxy-9z5zn" [05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8] Running
	I1201 19:06:22.196887   17783 system_pods.go:61] "kube-scheduler-addons-153147" [acdfbb0f-99cf-44e1-b6fc-2157e5de13bb] Running
	I1201 19:06:22.196892   17783 system_pods.go:61] "metrics-server-85b7d694d7-r5qgp" [776145bf-6b03-48e3-bbd9-1460bb1d5b86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:06:22.196897   17783 system_pods.go:61] "nvidia-device-plugin-daemonset-rcdwp" [42b47333-4324-46b0-9473-d92effc8cb10] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:06:22.196906   17783 system_pods.go:61] "registry-6b586f9694-mfkdk" [11619fff-1af5-4b33-8893-bcb6ad33587c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:06:22.196912   17783 system_pods.go:61] "registry-creds-764b6fb674-xdgz5" [c4a135e2-6714-483d-92c9-5a727086d4c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:06:22.196917   17783 system_pods.go:61] "registry-proxy-pw4sl" [5755be46-29a3-4a7e-9349-89d5d6200020] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:06:22.196922   17783 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5ddbm" [6006f1a2-b8bd-4d10-9265-4313f7d610bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:06:22.196931   17783 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wmd4x" [5f93acba-a273-49d3-ab26-c30d4f16d840] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:06:22.196936   17783 system_pods.go:61] "storage-provisioner" [366028de-640e-4307-982b-f015bfda82d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:06:22.196941   17783 system_pods.go:74] duration metric: took 26.030554ms to wait for pod list to return data ...
	I1201 19:06:22.196948   17783 default_sa.go:34] waiting for default service account to be created ...
	I1201 19:06:22.216209   17783 default_sa.go:45] found service account: "default"
	I1201 19:06:22.216244   17783 default_sa.go:55] duration metric: took 19.285956ms for default service account to be created ...
	I1201 19:06:22.216258   17783 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 19:06:22.221283   17783 system_pods.go:86] 20 kube-system pods found
	I1201 19:06:22.221322   17783 system_pods.go:89] "amd-gpu-device-plugin-nh9fh" [19ed7c27-42bf-429e-a659-5cab61a37789] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1201 19:06:22.221333   17783 system_pods.go:89] "coredns-66bc5c9577-7bgbb" [d82083d8-b7a2-4608-8b02-e6bbf9976482] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 19:06:22.221342   17783 system_pods.go:89] "coredns-66bc5c9577-qthgq" [1c971b48-0414-4686-9897-a70b10f42b2f] Running
	I1201 19:06:22.221350   17783 system_pods.go:89] "csi-hostpath-attacher-0" [a0da7e77-faf6-4065-9d43-305953b2e6e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1201 19:06:22.221357   17783 system_pods.go:89] "csi-hostpath-resizer-0" [e7a300b2-e469-4b5d-9ebc-f37fda2db088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1201 19:06:22.221365   17783 system_pods.go:89] "csi-hostpathplugin-x97sg" [11625919-d915-4098-abc3-6638f492f692] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1201 19:06:22.221374   17783 system_pods.go:89] "etcd-addons-153147" [fe1b4837-c595-4349-a8ec-771f6514e48d] Running
	I1201 19:06:22.221380   17783 system_pods.go:89] "kube-apiserver-addons-153147" [13a5d41f-f476-4996-a51b-61e6297cd643] Running
	I1201 19:06:22.221390   17783 system_pods.go:89] "kube-controller-manager-addons-153147" [66972fb7-9f43-4a64-babd-2a9ead11665a] Running
	I1201 19:06:22.221399   17783 system_pods.go:89] "kube-ingress-dns-minikube" [ada2334d-7448-402c-ba30-9ea15e6fe684] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1201 19:06:22.221407   17783 system_pods.go:89] "kube-proxy-9z5zn" [05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8] Running
	I1201 19:06:22.221414   17783 system_pods.go:89] "kube-scheduler-addons-153147" [acdfbb0f-99cf-44e1-b6fc-2157e5de13bb] Running
	I1201 19:06:22.221424   17783 system_pods.go:89] "metrics-server-85b7d694d7-r5qgp" [776145bf-6b03-48e3-bbd9-1460bb1d5b86] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1201 19:06:22.221434   17783 system_pods.go:89] "nvidia-device-plugin-daemonset-rcdwp" [42b47333-4324-46b0-9473-d92effc8cb10] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1201 19:06:22.221443   17783 system_pods.go:89] "registry-6b586f9694-mfkdk" [11619fff-1af5-4b33-8893-bcb6ad33587c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1201 19:06:22.221451   17783 system_pods.go:89] "registry-creds-764b6fb674-xdgz5" [c4a135e2-6714-483d-92c9-5a727086d4c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1201 19:06:22.221461   17783 system_pods.go:89] "registry-proxy-pw4sl" [5755be46-29a3-4a7e-9349-89d5d6200020] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1201 19:06:22.221469   17783 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5ddbm" [6006f1a2-b8bd-4d10-9265-4313f7d610bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:06:22.221481   17783 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wmd4x" [5f93acba-a273-49d3-ab26-c30d4f16d840] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1201 19:06:22.221489   17783 system_pods.go:89] "storage-provisioner" [366028de-640e-4307-982b-f015bfda82d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 19:06:22.221499   17783 system_pods.go:126] duration metric: took 5.233511ms to wait for k8s-apps to be running ...
	I1201 19:06:22.221509   17783 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 19:06:22.221561   17783 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:06:22.268147   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:22.272023   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:22.275246   17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1201 19:06:22.275270   17783 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1201 19:06:22.399021   17783 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:22.399054   17783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1201 19:06:22.507609   17783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1201 19:06:22.640154   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:22.770079   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:22.773505   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:23.127614   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:23.267225   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:23.269218   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:23.317894   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.054130391s)
	I1201 19:06:23.317961   17783 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.096376466s)
	I1201 19:06:23.317987   17783 system_svc.go:56] duration metric: took 1.096474542s WaitForService to wait for kubelet
	I1201 19:06:23.318005   17783 kubeadm.go:587] duration metric: took 10.955900933s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 19:06:23.318032   17783 node_conditions.go:102] verifying NodePressure condition ...
	I1201 19:06:23.323700   17783 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1201 19:06:23.323747   17783 node_conditions.go:123] node cpu capacity is 2
	I1201 19:06:23.323766   17783 node_conditions.go:105] duration metric: took 5.726408ms to run NodePressure ...
	I1201 19:06:23.323783   17783 start.go:242] waiting for startup goroutines ...
	I1201 19:06:23.659462   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:23.756955   17783 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.249301029s)
	I1201 19:06:23.758001   17783 addons.go:495] Verifying addon gcp-auth=true in "addons-153147"
	I1201 19:06:23.760360   17783 out.go:179] * Verifying gcp-auth addon...
	I1201 19:06:23.762202   17783 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1201 19:06:23.855653   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:23.875077   17783 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1201 19:06:23.875098   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:23.876180   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:24.128007   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:24.268090   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:24.270239   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:24.271030   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:24.625807   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:24.767328   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:24.767434   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:24.770344   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:25.126481   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:25.269159   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:25.272474   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:25.278543   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:25.625121   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:25.766172   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:25.767211   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:25.767377   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:26.126146   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:26.269456   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:26.272054   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:26.272593   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:26.626423   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:26.765660   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:26.765821   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:26.768655   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:27.125149   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:27.265046   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:27.267029   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:27.267905   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:27.625265   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:27.764853   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:27.764959   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:27.766333   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:28.125652   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:28.264874   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:28.265390   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:28.266962   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:28.625270   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:28.764857   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:28.765122   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:28.767173   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:29.124108   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:29.268559   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:29.269667   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:29.270390   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:29.625267   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:29.769315   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:29.769680   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:29.769696   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:30.126329   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:30.265772   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:30.266144   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:30.266205   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:30.624860   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:30.855927   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:30.884528   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:30.885226   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:31.124516   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:31.265519   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:31.265708   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:31.265817   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:31.625402   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:31.766031   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:31.766475   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:31.767366   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:32.125701   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:32.266424   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:32.267197   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:32.268687   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:32.626935   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:32.766379   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:32.768322   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:32.769694   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:33.128045   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:33.270023   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:33.270180   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:33.272848   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:33.625759   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:33.769019   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:33.769062   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:33.769090   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:34.127723   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:34.268076   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:34.268332   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:34.268859   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:34.626159   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:34.772595   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:34.773151   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:34.773956   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:35.125728   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:35.266248   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:35.266881   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:35.267812   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:35.625752   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:35.766713   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:35.767414   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:35.767586   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:36.124966   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:36.266761   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:36.267694   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:36.268035   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:36.625176   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:36.765035   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:36.765459   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:36.766575   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:37.125608   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:37.265095   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:37.265794   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:37.266405   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:37.624614   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:37.766105   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:37.766300   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:37.767484   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:38.125006   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:38.266175   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:38.266409   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:38.266427   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:38.624416   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:38.765840   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:38.768056   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:38.770751   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:39.126371   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:39.271420   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:39.271569   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:39.275157   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:39.625407   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:39.765790   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:39.766001   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:39.767841   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:40.126918   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:40.266264   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:40.266328   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:40.267467   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:40.626096   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:40.791764   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:40.791836   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:40.792032   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:41.125497   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:41.266305   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:41.266405   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:41.266966   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:41.624060   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:41.765661   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:41.765678   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:41.766210   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.126162   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:42.265275   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:42.265404   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:42.265735   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.625024   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:42.769548   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:42.769651   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:42.770316   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.126782   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.269889   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:43.272280   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.273967   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:43.625817   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:43.768474   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:43.768482   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:43.768642   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.125123   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.264490   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1201 19:06:44.264691   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.266418   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:44.624440   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:44.766520   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:44.768371   17783 kapi.go:107] duration metric: took 24.00729829s to wait for kubernetes.io/minikube-addons=registry ...
	I1201 19:06:44.768870   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:45.125850   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:45.265700   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:45.267319   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:45.624612   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:45.766080   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:45.766298   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:46.127960   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.267861   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.270510   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:46.625240   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:46.766441   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:46.768323   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:47.125005   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:47.268559   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:47.269500   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:47.626975   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:47.767717   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:47.768326   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.274557   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.279482   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.279921   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:48.629888   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:48.767022   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:48.767358   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.132150   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:49.268026   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.269544   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:49.625283   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:49.767382   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:49.772948   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.125187   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.265548   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:50.266092   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.625614   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:50.765545   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:50.765613   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.124871   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:51.270137   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:51.270750   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.625764   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:51.770291   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:51.770373   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.124463   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.267899   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.269976   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:52.791541   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:52.791557   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:52.792645   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.125889   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.271998   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.272248   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:53.625600   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:53.767871   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:53.769313   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.125897   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:54.266814   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.267722   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:54.625889   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:54.771752   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:54.771962   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.126696   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.266203   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.270026   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:55.626932   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:55.772904   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:55.773272   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.125863   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:56.266391   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.268136   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:56.624018   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:56.764332   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:56.765953   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.125288   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.265476   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:57.266115   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.625362   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:57.765944   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:57.766817   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.125927   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:58.271880   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.272046   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:58.624019   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:58.767318   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:58.768349   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.125460   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.268641   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.268773   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:06:59.625076   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:06:59.764821   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:06:59.766993   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.125296   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.271661   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.273943   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:00.625396   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:00.841047   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:00.844385   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.127821   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:01.270196   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:01.271879   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.625521   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:01.767332   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:01.767701   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.125375   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.265962   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.267253   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:02.624264   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:02.764344   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:02.765402   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.131934   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:03.265794   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.265934   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:03.625003   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:03.764548   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:03.765919   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.125994   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.267307   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.268374   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:04.627394   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:04.766413   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:04.768274   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.128044   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:05.267320   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.271083   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:05.627153   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:05.765620   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:05.769445   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.124181   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.268303   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.269371   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:06.624078   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:06.765438   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:06.765585   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.178687   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.266663   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:07.267527   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.625645   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:07.766453   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:07.766487   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.126110   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:08.265105   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:08.268597   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.624905   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:08.765727   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:08.765842   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.124519   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.266227   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:09.266424   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.626036   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:09.772525   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:09.773457   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.125143   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:10.268532   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:10.269078   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.626108   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:10.768359   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:10.768821   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.130043   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.267773   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.269453   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:11.624614   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:11.766673   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:11.766755   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.126271   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:12.268837   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:12.271184   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.627121   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:12.766451   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:12.767961   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.126286   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.283005   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:13.284677   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.634136   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:13.767955   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:13.768591   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.124616   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.273434   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:14.286458   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.624931   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:14.768454   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:14.773171   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.127864   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:15.271298   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.272623   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:15.629055   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:15.766372   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:15.766891   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.124875   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.267302   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:16.267515   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.663587   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:16.880223   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:16.880716   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.125160   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:17.267817   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.268002   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:17.625094   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:17.764973   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:17.765134   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.126790   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.266608   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.268993   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:18.624711   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:18.766200   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:18.766357   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.124774   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:19.269057   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:19.271310   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.624440   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:19.765928   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:19.766756   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.124177   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.267862   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:20.268020   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.628920   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:20.766109   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:20.766313   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.123968   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:21.266084   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.267926   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:21.624435   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:21.767005   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:21.769907   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.125479   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.271110   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.271322   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:22.625945   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:22.775820   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:22.775934   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.125571   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.266646   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:23.271341   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.626035   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:23.768383   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:23.769734   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.124938   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.270021   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:24.270493   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.626674   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:24.770225   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:24.775590   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.128017   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.268625   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:25.268869   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.627486   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:25.765654   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:25.766591   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.125696   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.268231   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.268539   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:26.624679   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:26.769966   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:26.770399   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.125743   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.267558   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:27.268167   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.626387   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:27.770184   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:27.770782   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.201751   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.269146   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:28.269673   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.626008   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:28.766925   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:28.766977   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.127906   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.266166   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.266323   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:29.629741   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:29.767695   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:29.768743   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.126385   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:30.269583   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:30.271306   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:30.624539   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.032243   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.039379   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.127042   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.267660   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.270057   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:31.625206   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:31.766183   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:31.767503   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.123975   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.266050   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.267505   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:32.624675   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:32.774179   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:32.780747   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.125183   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.266371   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.267319   17783 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1201 19:07:33.626199   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:33.769148   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:33.769542   17783 kapi.go:107] duration metric: took 1m13.008548934s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1201 19:07:34.126643   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.371178   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:34.625558   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:34.766288   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.123951   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1201 19:07:35.266771   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:35.625479   17783 kapi.go:107] duration metric: took 1m13.505083555s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1201 19:07:35.765866   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.266556   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:36.767271   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.268422   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:37.767030   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.267324   17783 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1201 19:07:38.766652   17783 kapi.go:107] duration metric: took 1m15.004447005s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1201 19:07:38.768555   17783 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153147 cluster.
	I1201 19:07:38.769920   17783 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1201 19:07:38.771306   17783 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1201 19:07:38.772741   17783 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, registry-creds, storage-provisioner, inspektor-gadget, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1201 19:07:38.774215   17783 addons.go:530] duration metric: took 1m26.412047147s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass registry-creds storage-provisioner inspektor-gadget amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1201 19:07:38.774258   17783 start.go:247] waiting for cluster config update ...
	I1201 19:07:38.774283   17783 start.go:256] writing updated cluster config ...
	I1201 19:07:38.774569   17783 ssh_runner.go:195] Run: rm -f paused
	I1201 19:07:38.782048   17783 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:07:38.868080   17783 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qthgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.873747   17783 pod_ready.go:94] pod "coredns-66bc5c9577-qthgq" is "Ready"
	I1201 19:07:38.873775   17783 pod_ready.go:86] duration metric: took 5.659434ms for pod "coredns-66bc5c9577-qthgq" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.876276   17783 pod_ready.go:83] waiting for pod "etcd-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.881293   17783 pod_ready.go:94] pod "etcd-addons-153147" is "Ready"
	I1201 19:07:38.881309   17783 pod_ready.go:86] duration metric: took 5.015035ms for pod "etcd-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.883057   17783 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.888335   17783 pod_ready.go:94] pod "kube-apiserver-addons-153147" is "Ready"
	I1201 19:07:38.888361   17783 pod_ready.go:86] duration metric: took 5.288202ms for pod "kube-apiserver-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:38.890446   17783 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:39.186871   17783 pod_ready.go:94] pod "kube-controller-manager-addons-153147" is "Ready"
	I1201 19:07:39.186901   17783 pod_ready.go:86] duration metric: took 296.434781ms for pod "kube-controller-manager-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:39.387052   17783 pod_ready.go:83] waiting for pod "kube-proxy-9z5zn" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:39.787200   17783 pod_ready.go:94] pod "kube-proxy-9z5zn" is "Ready"
	I1201 19:07:39.787239   17783 pod_ready.go:86] duration metric: took 400.160335ms for pod "kube-proxy-9z5zn" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:39.987769   17783 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:40.387148   17783 pod_ready.go:94] pod "kube-scheduler-addons-153147" is "Ready"
	I1201 19:07:40.387177   17783 pod_ready.go:86] duration metric: took 399.374204ms for pod "kube-scheduler-addons-153147" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 19:07:40.387196   17783 pod_ready.go:40] duration metric: took 1.605112351s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 19:07:40.434089   17783 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 19:07:40.436319   17783 out.go:179] * Done! kubectl is now configured to use "addons-153147" cluster and "default" namespace by default
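(Editor's note, not part of the captured log: the gcp-auth addon output above says a pod can opt out of the credential mount by adding a label with the `gcp-auth-skip-secret` key. Below is a minimal, illustrative sketch of what that looks like. The label key comes from the log; the value "true", the pod name, and the use of the busybox image already present in this cluster are assumptions for illustration only, not part of the test suite.)

# Hypothetical example: create a pod that skips the gcp-auth credential mount.
# Assumption: the webhook honors the value "true" on the gcp-auth-skip-secret label.
kubectl --context addons-153147 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-example        # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"   # key taken from the addon output above
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox   # image already pulled in this cluster per the CRI-O log below
    command: ["sleep", "3600"]
EOF

# To verify, one could inspect the pod's volumes and confirm the gcp-auth secret was not injected:
#   kubectl --context addons-153147 get pod no-gcp-auth-example -o yaml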
	
	
	==> CRI-O <==
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.535286970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a316f5eb-1e67-40f6-90c1-cc753769fdb6 name=/runtime.v1.RuntimeService/Version
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.536314115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccb4140f-f1c2-4d87-b7c2-da64d4c5b5be name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.537650730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764616249537622846,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccb4140f-f1c2-4d87-b7c2-da64d4c5b5be name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.538640248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.538854385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.539469818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d0cf697-c532-4728-ae81-03cdfd3e140c name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.573789000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd4a77da-981b-475c-8152-da9586d8ecd4 name=/runtime.v1.RuntimeService/Version
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.573892062Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd4a77da-981b-475c-8152-da9586d8ecd4 name=/runtime.v1.RuntimeService/Version
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.575458065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=838edf4a-8707-4bcb-9ece-e5649c925b4d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.576856636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764616249576831567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=838edf4a-8707-4bcb-9ece-e5649c925b4d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.577757168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.577894028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.578858968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f4c5c6-a4b1-474c-b553-97a560e44a0e name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.595114475Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2c704fc8-da7f-4e80-b2f5-efa87158939e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.595921588Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&PodSandboxMetadata{Name:nginx,Uid:4486d923-4013-47f9-8cd9-a81f1ddebd66,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1764616100409884707,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:08:19.701805241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b2c7cc93-0f51-443c-a999-402fe4c9076b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616061352886089,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:07:41.032534617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9263258d416914e7b977e
e63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-6c8bf45fb-j5gk6,Uid:d71a8554-45d4-4d96-a11a-f3dd97666c64,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616044827315531,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,pod-template-hash: 6c8bf45fb,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.603137804Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-8l42q,Uid:fcd4f82d-09d7-45c1-b696-ba124b55f6da,Namespace:ingress-nginx,Attempt:0,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1764615982062870200,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: ec966a3f-cd0e-4031-bdac-14f082abfed5,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: ec966a3f-cd0e-4031-bdac-14f082abfed5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.781669391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-slpzw,Uid:4db4569d-df65-42e4-808a-cfe898d653c2,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1764615981367424020,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 6225b011-25e7-4162-9938-a08f4e103cc7,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 6225b011-25e7-4162-9938-a08f4e103cc7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:20.838672171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:366028de-640e-4307-982b-f015bfda82d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615978652607450,Labels:map[string]str
ing{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/co
nfig.seen: 2025-12-01T19:06:18.304726006Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:ada2334d-7448-402c-ba30-9ea15e6fe684,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615978398137737,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"5
3\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-12-01T19:06:18.053043209Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-nh9fh,Uid:19ed7c27-42bf-429e-a659-5cab61a37789,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:176461597573627545
7,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:15.398463902Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-qthgq,Uid:1c971b48-0414-4686-9897-a70b10f42b2f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615972203885620,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[strin
g]string{kubernetes.io/config.seen: 2025-12-01T19:06:11.873300050Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&PodSandboxMetadata{Name:kube-proxy-9z5zn,Uid:05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615971907382537,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:06:11.578255976Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-153147,Uid:9be8929a3a21c147a11b04c6ddd818cb,Namespace:kube-system,Attempt:0,},State:
SANDBOX_READY,CreatedAt:1764615960126392010,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9be8929a3a21c147a11b04c6ddd818cb,kubernetes.io/config.seen: 2025-12-01T19:05:59.603211708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-153147,Uid:f354084d95d2a2a9d7ac1e0e2f17a965,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960123332507,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,tier: control-plane,},
Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.9:8443,kubernetes.io/config.hash: f354084d95d2a2a9d7ac1e0e2f17a965,kubernetes.io/config.seen: 2025-12-01T19:05:59.603209508Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-153147,Uid:230b3557e2dadce65ee48646e716bd4c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960120705229,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 230b3557e2dadce65ee48646e716bd4c,kubernetes.io/config.seen: 2025-12-01T19:05:59.603210809Z,kubernetes.io/config.source: file,},Runtime
Handler:,},&PodSandbox{Id:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&PodSandboxMetadata{Name:etcd-addons-153147,Uid:a53189a71631f236402671f457423c6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764615960120328417,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.9:2379,kubernetes.io/config.hash: a53189a71631f236402671f457423c6d,kubernetes.io/config.seen: 2025-12-01T19:05:59.603204988Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2c704fc8-da7f-4e80-b2f5-efa87158939e name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.597776607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.597857906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.598286714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dff7799c97435cd4d635804cb2ce271bd40ad1cd3f18edcf6046c2f1b2b63ec1,PodSandboxId:a3ecafa2ef89605a21e0cfb3a2a3663f1f11e20978dc92cf686899587f802c8c,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764616107675426138,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4486d923-4013-47f9-8cd9-a81f1ddebd66,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e556ec8c41a707014e28ca7432e8a3cea76f365d0a642f3f5f529658529e05,PodSandboxId:509f2f394e11771149a60a24722adfac16e8b8b48f811577c51078edb908eeec,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764616064872566136,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b2c7cc93-0f51-443c-a999-402fe4c9076b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87c1009d204a1de506f5fd769f03040322ef0fc2612dce071e3cc43d1802bca,PodSandboxId:9263258d416914e7b977ee63ebbedfdbd942b69997e20d6b25cb37aa04480c96,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764616052731591227,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-j5gk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d71a8554-45d4-4d96-a11a-f3dd97666c64,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:66454cfa07aa182773e252bc453bf7eacd3db562d79ee157e4d4aba4ce93b9f6,PodSandboxId:b268ef6ce3eff24e23b760a6b43e42617fba7f4706069b9061842e2f8649b96f,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1764616052586179570,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-slpzw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4db4569d-df65-42e4-808a-cfe898d653c2,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c7edac07552432ac51d44db6e90fc88c44d1bb2a846f4076c306e80ef691df6,PodSandboxId:799f078c1ac8798e0eafe6480c97ab9e59e40f23fad74397abcdef6174958f67,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764616033603465723,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-8l42q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fcd4f82d-09d7-45c1-b696-ba124b55f6da,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:addccc37646a554def25bbd5bda9c133577ddbc7e024872ea0c4a7ec53fe7c9b,PodSandboxId:09440f74f7ddb2505ddd8ca93fc4ad0ea25b4c2e6f25ac588d87093d9af39a25,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764616013753578311,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ada2334d-7448-402c-ba30-9ea15e6fe684,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eddcd05f5f9c16544cce2ca6e13a573a7d06cc799e4df0460b8b35221b96bc2d,PodSandboxId:9792fb4e64dde1847bba01cfe38915107e19398cb453e24067b8569a02047ade,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f38354
98cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764615990010406867,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-nh9fh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19ed7c27-42bf-429e-a659-5cab61a37789,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404,PodSandboxId:f125b2534c5c1b5dbaa103887d8ba86e851b05f21020e6d9e2496059cef74245,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d
628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764615980107437043,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 366028de-640e-4307-982b-f015bfda82d0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b,PodSandboxId:3fec75154cb91d729c1a32f602c79580a0369d103037645a47b19ec46c1d2557,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764615973035605383,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-qthgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c971b48-0414-4686-9897-a70b10f42b2f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f,PodSandboxId:515488c0643710af8511b8d09091d23c04bc1827e696ed5a6838803562887c7d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764615972026087662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9z5zn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05f6dd4e-50d1-437b-b0f6-8f7f30ef91f8,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520,PodSandboxId:2ceab01349c6401fb618bfc795ee60b5eded868fd2704113b6846229b32726bf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764615960365913364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a53189a71631f236402671f457423c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c,PodSandboxId:4acefd3804f0268c4f71d992b8f6e2098b3252f328722cfa829cca14b771cdbe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764615960378686318,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9be8929a3a21c147a11b04c6ddd818cb,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9,PodSandboxId:3907faecd946b05b5f7a93b7b53539328ec2e14e3e10aab05cf1911234ec06e9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764615960342245559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153147,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f354084d95d2a2a9d7ac1e0e2f17a965,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400,PodSandboxId:9bf75387a3b64ffc5422f8eaf5f650528df4434fe16cd8d6a276d4fbfe1e2ffe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764615960310184581,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-153147,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 230b3557e2dadce65ee48646e716bd4c,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ee24391-7879-4239-8662-a12478b047ef name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.600382424Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,},},}" file="otel-collector/interceptors.go:62" id=a0c2d304-0e90-4fc9-be1d-a395ac5ecbd4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.601650463Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a0c2d304-0e90-4fc9-be1d-a395ac5ecbd4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.603271387Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Verbose:false,}" file="otel-collector/interceptors.go:62" id=fc03c35b-dfe7-4b53-9adc-73e80ab85c69 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.603405769Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:64a9cda6d80a5980de523d1b693a48aec3f4ea54fc83c74bb3f714c1952faf6e,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-bp2ws,Uid:3bf05b82-6c0e-4593-a9ca-a5ed936510a2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1764616248749436032,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:&UserNamespace{Mode:NODE,Uids:[]*IDMapping{},Gids:[]*IDMapping{},},},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-bp2ws,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen:
2025-12-01T19:10:48.427671073Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=fc03c35b-dfe7-4b53-9adc-73e80ab85c69 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604803078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 3bf05b82-6c0e-4593-a9ca-a5ed936510a2,},},}" file="otel-collector/interceptors.go:62" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604858514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 19:10:49 addons-153147 crio[816]: time="2025-12-01 19:10:49.604921066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1da3b1b0-2331-42ac-8fc5-6adc708be008 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	dff7799c97435       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   a3ecafa2ef896       nginx                                      default
	f2e556ec8c41a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   509f2f394e117       busybox                                    default
	d87c1009d204a       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   9263258d41691       ingress-nginx-controller-6c8bf45fb-j5gk6   ingress-nginx
	66454cfa07aa1       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                             3 minutes ago       Exited              patch                     2                   b268ef6ce3eff       ingress-nginx-admission-patch-slpzw        ingress-nginx
	2c7edac075524       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   799f078c1ac87       ingress-nginx-admission-create-8l42q       ingress-nginx
	addccc37646a5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   09440f74f7ddb       kube-ingress-dns-minikube                  kube-system
	eddcd05f5f9c1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   9792fb4e64dde       amd-gpu-device-plugin-nh9fh                kube-system
	0bd6c211af9d7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   f125b2534c5c1       storage-provisioner                        kube-system
	83da94bcf0ecf       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   3fec75154cb91       coredns-66bc5c9577-qthgq                   kube-system
	8cdfe08daf700       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   515488c064371       kube-proxy-9z5zn                           kube-system
	b03450bd94001       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   4acefd3804f02       kube-scheduler-addons-153147               kube-system
	49e095ec070fd       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   2ceab01349c64       etcd-addons-153147                         kube-system
	0eb3f768ff899       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   3907faecd946b       kube-apiserver-addons-153147               kube-system
	15bfca845ba7b       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   9bf75387a3b64       kube-controller-manager-addons-153147      kube-system
	
	
	==> coredns [83da94bcf0ecf178c1849acd3c0ccd1cf809df4397be23b3f50fed5afaf49d3b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39060 - 34052 "HINFO IN 1330166463351145051.8371010972066214296. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025057212s
	[INFO] 10.244.0.23:37019 - 44053 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00045101s
	[INFO] 10.244.0.23:35276 - 12720 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00135739s
	[INFO] 10.244.0.23:60577 - 18978 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113032s
	[INFO] 10.244.0.23:46315 - 41230 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000872525s
	[INFO] 10.244.0.23:49657 - 35489 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000414109s
	[INFO] 10.244.0.23:51289 - 11755 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000540474s
	[INFO] 10.244.0.23:38509 - 16973 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00140446s
	[INFO] 10.244.0.23:50173 - 52410 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001122543s
	[INFO] 10.244.0.26:48423 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000604409s
	[INFO] 10.244.0.26:60129 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00023625s
	
	
	==> describe nodes <==
	Name:               addons-153147
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-153147
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=addons-153147
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T19_06_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153147
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 19:06:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153147
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 19:10:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 19:09:10 +0000   Mon, 01 Dec 2025 19:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 19:09:10 +0000   Mon, 01 Dec 2025 19:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 19:09:10 +0000   Mon, 01 Dec 2025 19:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 19:09:10 +0000   Mon, 01 Dec 2025 19:06:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    addons-153147
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 b210d02d07be413197b5bb937549f8ab
	  System UUID:                b210d02d-07be-4131-97b5-bb937549f8ab
	  Boot ID:                    21e491b2-8bd2-497b-9210-febc088453e1
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02.8
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-world-app-5d498dc89-bp2ws             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-j5gk6    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m29s
	  kube-system                 amd-gpu-device-plugin-nh9fh                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-66bc5c9577-qthgq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-153147                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-153147                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-addons-153147       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-9z5zn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-153147                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m37s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-153147 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-153147 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-153147 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s  kubelet          Node addons-153147 status is now: NodeReady
	  Normal  RegisteredNode           4m39s  node-controller  Node addons-153147 event: Registered Node addons-153147 in Controller
	
	
	==> dmesg <==
	[  +0.115056] kauditd_printk_skb: 321 callbacks suppressed
	[  +1.527686] kauditd_printk_skb: 353 callbacks suppressed
	[  +8.661666] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.857554] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.945283] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 1 19:07] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.426219] kauditd_printk_skb: 86 callbacks suppressed
	[  +6.138416] kauditd_printk_skb: 56 callbacks suppressed
	[  +3.567492] kauditd_printk_skb: 86 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 126 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 44 callbacks suppressed
	[  +1.051192] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.000099] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.086961] kauditd_printk_skb: 47 callbacks suppressed
	[Dec 1 19:08] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.633881] kauditd_printk_skb: 149 callbacks suppressed
	[  +0.853585] kauditd_printk_skb: 153 callbacks suppressed
	[  +3.861415] kauditd_printk_skb: 125 callbacks suppressed
	[  +1.862602] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.208541] kauditd_printk_skb: 46 callbacks suppressed
	[  +8.218312] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.664871] kauditd_printk_skb: 10 callbacks suppressed
	[Dec 1 19:09] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.832307] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 1 19:10] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [49e095ec070fd59c1502545f9404290e7b2310014f1838dc85f42e2ec9d71520] <==
	{"level":"info","ts":"2025-12-01T19:07:16.867514Z","caller":"traceutil/trace.go:172","msg":"trace[1667234423] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"216.478317ms","start":"2025-12-01T19:07:16.651030Z","end":"2025-12-01T19:07:16.867508Z","steps":["trace[1667234423] 'process raft request'  (duration: 211.118928ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:07:16.867629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.109871ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:16.867647Z","caller":"traceutil/trace.go:172","msg":"trace[1163621068] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1045; }","duration":"106.129025ms","start":"2025-12-01T19:07:16.761513Z","end":"2025-12-01T19:07:16.867642Z","steps":["trace[1163621068] 'agreement among raft nodes before linearized reading'  (duration: 106.068856ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:28.190837Z","caller":"traceutil/trace.go:172","msg":"trace[580058826] transaction","detail":"{read_only:false; response_revision:1120; number_of_response:1; }","duration":"168.378845ms","start":"2025-12-01T19:07:28.021910Z","end":"2025-12-01T19:07:28.190289Z","steps":["trace[580058826] 'process raft request'  (duration: 162.727533ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:29.546132Z","caller":"traceutil/trace.go:172","msg":"trace[1898381605] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1156; }","duration":"101.329381ms","start":"2025-12-01T19:07:29.444785Z","end":"2025-12-01T19:07:29.546114Z","steps":["trace[1898381605] 'read index received'  (duration: 101.325052ms)","trace[1898381605] 'applied index is now lower than readState.Index'  (duration: 3.782µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-01T19:07:29.546289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.488993ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:29.546310Z","caller":"traceutil/trace.go:172","msg":"trace[300433651] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1123; }","duration":"101.523736ms","start":"2025-12-01T19:07:29.444780Z","end":"2025-12-01T19:07:29.546304Z","steps":["trace[300433651] 'agreement among raft nodes before linearized reading'  (duration: 101.433073ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:29.547269Z","caller":"traceutil/trace.go:172","msg":"trace[1752219314] transaction","detail":"{read_only:false; response_revision:1124; number_of_response:1; }","duration":"107.896601ms","start":"2025-12-01T19:07:29.439363Z","end":"2025-12-01T19:07:29.547259Z","steps":["trace[1752219314] 'process raft request'  (duration: 107.067035ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:31.026447Z","caller":"traceutil/trace.go:172","msg":"trace[1390253838] linearizableReadLoop","detail":"{readStateIndex:1158; appliedIndex:1158; }","duration":"266.222026ms","start":"2025-12-01T19:07:30.760209Z","end":"2025-12-01T19:07:31.026431Z","steps":["trace[1390253838] 'read index received'  (duration: 266.216954ms)","trace[1390253838] 'applied index is now lower than readState.Index'  (duration: 4.468µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-01T19:07:31.026545Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.320452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:31.026562Z","caller":"traceutil/trace.go:172","msg":"trace[1367812758] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"266.351144ms","start":"2025-12-01T19:07:30.760206Z","end":"2025-12-01T19:07:31.026557Z","steps":["trace[1367812758] 'agreement among raft nodes before linearized reading'  (duration: 266.295058ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:07:31.031237Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"268.34602ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:31.031995Z","caller":"traceutil/trace.go:172","msg":"trace[360982683] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1125; }","duration":"269.106518ms","start":"2025-12-01T19:07:30.762872Z","end":"2025-12-01T19:07:31.031978Z","steps":["trace[360982683] 'agreement among raft nodes before linearized reading'  (duration: 268.324982ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:07:31.032395Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.373985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:31.032433Z","caller":"traceutil/trace.go:172","msg":"trace[1215511720] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:1125; }","duration":"138.419366ms","start":"2025-12-01T19:07:30.894003Z","end":"2025-12-01T19:07:31.032422Z","steps":["trace[1215511720] 'agreement among raft nodes before linearized reading'  (duration: 138.35243ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:34.354066Z","caller":"traceutil/trace.go:172","msg":"trace[791098058] linearizableReadLoop","detail":"{readStateIndex:1179; appliedIndex:1179; }","duration":"133.783596ms","start":"2025-12-01T19:07:34.220263Z","end":"2025-12-01T19:07:34.354046Z","steps":["trace[791098058] 'read index received'  (duration: 133.777623ms)","trace[791098058] 'applied index is now lower than readState.Index'  (duration: 5.114µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-01T19:07:34.354352Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.069309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-01T19:07:34.354564Z","caller":"traceutil/trace.go:172","msg":"trace[723472346] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1145; }","duration":"134.282105ms","start":"2025-12-01T19:07:34.220259Z","end":"2025-12-01T19:07:34.354541Z","steps":["trace[723472346] 'agreement among raft nodes before linearized reading'  (duration: 134.00585ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:34.354395Z","caller":"traceutil/trace.go:172","msg":"trace[1910775046] transaction","detail":"{read_only:false; response_revision:1146; number_of_response:1; }","duration":"145.211146ms","start":"2025-12-01T19:07:34.209174Z","end":"2025-12-01T19:07:34.354385Z","steps":["trace[1910775046] 'process raft request'  (duration: 145.130137ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:07:34.357251Z","caller":"traceutil/trace.go:172","msg":"trace[543788620] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"108.395309ms","start":"2025-12-01T19:07:34.248846Z","end":"2025-12-01T19:07:34.357242Z","steps":["trace[543788620] 'process raft request'  (duration: 108.347939ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:08:18.965694Z","caller":"traceutil/trace.go:172","msg":"trace[75931164] transaction","detail":"{read_only:false; response_revision:1469; number_of_response:1; }","duration":"145.955141ms","start":"2025-12-01T19:08:18.819712Z","end":"2025-12-01T19:08:18.965667Z","steps":["trace[75931164] 'process raft request'  (duration: 145.87455ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-01T19:08:20.056740Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.003872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-01T19:08:20.056793Z","caller":"traceutil/trace.go:172","msg":"trace[1192414740] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1496; }","duration":"139.085402ms","start":"2025-12-01T19:08:19.917697Z","end":"2025-12-01T19:08:20.056783Z","steps":["trace[1192414740] 'range keys from in-memory index tree'  (duration: 138.933386ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:08:38.304778Z","caller":"traceutil/trace.go:172","msg":"trace[629939612] transaction","detail":"{read_only:false; response_revision:1616; number_of_response:1; }","duration":"146.512493ms","start":"2025-12-01T19:08:38.158250Z","end":"2025-12-01T19:08:38.304763Z","steps":["trace[629939612] 'process raft request'  (duration: 146.432108ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-01T19:08:39.672835Z","caller":"traceutil/trace.go:172","msg":"trace[286552751] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"119.034778ms","start":"2025-12-01T19:08:39.553787Z","end":"2025-12-01T19:08:39.672822Z","steps":["trace[286552751] 'process raft request'  (duration: 118.907382ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:10:49 up 5 min,  0 users,  load average: 0.53, 0.93, 0.47
	Linux addons-153147 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  1 18:07:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02.8"
	
	
	==> kube-apiserver [0eb3f768ff899570f82f5fd37af6a8386f67ee3eef54aedc3896727a240e84c9] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1201 19:07:03.121094       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1201 19:07:03.131091       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1201 19:07:52.224250       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37488: use of closed network connection
	E1201 19:07:52.420671       1 conn.go:339] Error on socket receive: read tcp 192.168.39.9:8443->192.168.39.1:37502: use of closed network connection
	I1201 19:08:13.512336       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.220.6"}
	I1201 19:08:19.521299       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1201 19:08:19.746536       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.104.51"}
	E1201 19:08:41.346916       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1201 19:08:47.200909       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1201 19:09:04.106001       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1201 19:09:09.778242       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1201 19:09:09.778367       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1201 19:09:09.814457       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1201 19:09:09.827820       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1201 19:09:09.858119       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1201 19:09:09.858223       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1201 19:09:09.885919       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1201 19:09:09.885997       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1201 19:09:10.829608       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1201 19:09:10.886036       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1201 19:09:10.952022       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1201 19:09:11.286433       1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	I1201 19:10:48.488144       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.132.7"}
	
	
	==> kube-controller-manager [15bfca845ba7bee915e083e47d9b379fd72cfc886332e91ec435f41a7d475400] <==
	E1201 19:09:14.976494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:17.825249       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:17.826857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:18.026580       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:18.027787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:19.621705       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:19.622817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:27.786467       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:27.787725       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:28.318665       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:28.319659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:29.499618       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:29.500829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:41.994157       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:41.995199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:43.925543       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:43.926502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:09:47.807247       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:09:47.808290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:10:20.001424       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:10:20.002597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:10:23.345825       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:10:23.346828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1201 19:10:28.900396       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1201 19:10:28.901506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [8cdfe08daf700cc46cbce31a511280cd3e8431e0915795fa962406fa7bfb703f] <==
	I1201 19:06:12.217269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 19:06:12.319351       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 19:06:12.319429       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.9"]
	E1201 19:06:12.319504       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 19:06:12.489229       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1201 19:06:12.489294       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1201 19:06:12.489322       1 server_linux.go:132] "Using iptables Proxier"
	I1201 19:06:12.529641       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 19:06:12.529983       1 server.go:527] "Version info" version="v1.34.2"
	I1201 19:06:12.529997       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 19:06:12.531024       1 config.go:106] "Starting endpoint slice config controller"
	I1201 19:06:12.531036       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 19:06:12.540526       1 config.go:200] "Starting service config controller"
	I1201 19:06:12.540558       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 19:06:12.540897       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 19:06:12.540905       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 19:06:12.555077       1 config.go:309] "Starting node config controller"
	I1201 19:06:12.555106       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 19:06:12.555114       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 19:06:12.633101       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1201 19:06:12.641432       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 19:06:12.641450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b03450bd940018d595ff4bdb217255616ba7406499522b6f958ac6c5deaccb9c] <==
	E1201 19:06:03.723111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:03.723159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:03.723229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 19:06:03.723276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 19:06:03.723345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:03.723550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:03.723669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 19:06:03.723827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:06:03.724218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 19:06:03.724328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 19:06:04.675237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 19:06:04.679007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 19:06:04.703870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 19:06:04.745006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 19:06:04.761308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 19:06:04.824404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 19:06:04.852930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 19:06:04.887095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1201 19:06:04.944071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 19:06:04.964853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 19:06:04.968251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 19:06:04.992075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 19:06:05.040878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 19:06:05.074210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1201 19:06:07.797713       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 19:09:13 addons-153147 kubelet[1509]: E1201 19:09:13.020084    1509 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": container with ID starting with f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3 not found: ID does not exist" containerID="f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3"
	Dec 01 19:09:13 addons-153147 kubelet[1509]: I1201 19:09:13.020127    1509 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3"} err="failed to get container status \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": rpc error: code = NotFound desc = could not find container \"f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3\": container with ID starting with f922107af7ad9d3370bb307907eeae32f69accbdf1a450177580ee8d3d894eb3 not found: ID does not exist"
	Dec 01 19:09:16 addons-153147 kubelet[1509]: E1201 19:09:16.976699    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616156974881278 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:16 addons-153147 kubelet[1509]: E1201 19:09:16.976741    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616156974881278 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:26 addons-153147 kubelet[1509]: E1201 19:09:26.979227    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616166978657703 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:26 addons-153147 kubelet[1509]: E1201 19:09:26.979253    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616166978657703 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:36 addons-153147 kubelet[1509]: E1201 19:09:36.983514    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616176982796946 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:36 addons-153147 kubelet[1509]: E1201 19:09:36.983557    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616176982796946 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:46 addons-153147 kubelet[1509]: E1201 19:09:46.987477    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616186986784654 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:46 addons-153147 kubelet[1509]: E1201 19:09:46.987527    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616186986784654 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:56 addons-153147 kubelet[1509]: E1201 19:09:56.990205    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616196989609845 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:09:56 addons-153147 kubelet[1509]: E1201 19:09:56.990232    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616196989609845 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:01 addons-153147 kubelet[1509]: I1201 19:10:01.324593    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-nh9fh" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:10:05 addons-153147 kubelet[1509]: I1201 19:10:05.324487    1509 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 01 19:10:06 addons-153147 kubelet[1509]: E1201 19:10:06.993495    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616206993004647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:06 addons-153147 kubelet[1509]: E1201 19:10:06.993570    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616206993004647 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:16 addons-153147 kubelet[1509]: E1201 19:10:16.996062    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616216995575510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:16 addons-153147 kubelet[1509]: E1201 19:10:16.996089    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616216995575510 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:26 addons-153147 kubelet[1509]: E1201 19:10:26.999280    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616226998792023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:26 addons-153147 kubelet[1509]: E1201 19:10:26.999309    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616226998792023 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:37 addons-153147 kubelet[1509]: E1201 19:10:37.002998    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616237002038910 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:37 addons-153147 kubelet[1509]: E1201 19:10:37.003032    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616237002038910 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:47 addons-153147 kubelet[1509]: E1201 19:10:47.005763    1509 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764616247005317858 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:47 addons-153147 kubelet[1509]: E1201 19:10:47.005806    1509 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764616247005317858 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 01 19:10:48 addons-153147 kubelet[1509]: I1201 19:10:48.504343    1509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxz7w\" (UniqueName: \"kubernetes.io/projected/3bf05b82-6c0e-4593-a9ca-a5ed936510a2-kube-api-access-nxz7w\") pod \"hello-world-app-5d498dc89-bp2ws\" (UID: \"3bf05b82-6c0e-4593-a9ca-a5ed936510a2\") " pod="default/hello-world-app-5d498dc89-bp2ws"
	
	
	==> storage-provisioner [0bd6c211af9d77e9446c6d73e364921bbba94647263e4c21fcabc93853307404] <==
	W1201 19:10:24.658346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:26.661466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:26.666628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:28.670459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:28.677008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:30.681313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:30.690114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:32.693711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:32.702520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:34.706656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:34.712194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:36.716064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:36.721118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:38.725030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:38.733925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:40.736825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:40.742586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:42.746235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:42.751240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:44.755144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:44.760380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:46.763841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:46.769406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:48.776744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1201 19:10:48.792270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153147 -n addons-153147
helpers_test.go:269: (dbg) Run:  kubectl --context addons-153147 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw: exit status 1 (83.794844ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-bp2ws
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-153147/192.168.39.9
	Start Time:       Mon, 01 Dec 2025 19:10:48 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nxz7w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nxz7w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-bp2ws to addons-153147
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8l42q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-slpzw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-153147 describe pod hello-world-app-5d498dc89-bp2ws ingress-nginx-admission-create-8l42q ingress-nginx-admission-patch-slpzw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable ingress-dns --alsologtostderr -v=1: (1.033739436s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable ingress --alsologtostderr -v=1: (7.781734517s)
--- FAIL: TestAddons/parallel/Ingress (160.29s)

                                                
                                    
TestPreload (145.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-245765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1201 20:03:28.702953   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-245765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m29.89576852s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-245765 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-245765 image pull gcr.io/k8s-minikube/busybox: (3.657376274s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-245765
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-245765: (6.856849898s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-245765 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1201 20:05:25.634091   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-245765 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (42.187906264s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-245765 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-01 20:05:38.190979824 +0000 UTC m=+3646.608878097
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-245765 -n test-preload-245765
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-245765 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-685862 ssh -n multinode-685862-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ ssh     │ multinode-685862 ssh -n multinode-685862 sudo cat /home/docker/cp-test_multinode-685862-m03_multinode-685862.txt                                          │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ cp      │ multinode-685862 cp multinode-685862-m03:/home/docker/cp-test.txt multinode-685862-m02:/home/docker/cp-test_multinode-685862-m03_multinode-685862-m02.txt │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ ssh     │ multinode-685862 ssh -n multinode-685862-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ ssh     │ multinode-685862 ssh -n multinode-685862-m02 sudo cat /home/docker/cp-test_multinode-685862-m03_multinode-685862-m02.txt                                  │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ node    │ multinode-685862 node stop m03                                                                                                                            │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:52 UTC │
	│ node    │ multinode-685862 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:52 UTC │ 01 Dec 25 19:53 UTC │
	│ node    │ list -p multinode-685862                                                                                                                                  │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:53 UTC │                     │
	│ stop    │ -p multinode-685862                                                                                                                                       │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:53 UTC │ 01 Dec 25 19:56 UTC │
	│ start   │ -p multinode-685862 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:56 UTC │ 01 Dec 25 19:58 UTC │
	│ node    │ list -p multinode-685862                                                                                                                                  │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:58 UTC │                     │
	│ node    │ multinode-685862 node delete m03                                                                                                                          │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:58 UTC │ 01 Dec 25 19:58 UTC │
	│ stop    │ multinode-685862 stop                                                                                                                                     │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 19:58 UTC │ 01 Dec 25 20:01 UTC │
	│ start   │ -p multinode-685862 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 20:01 UTC │ 01 Dec 25 20:02 UTC │
	│ node    │ list -p multinode-685862                                                                                                                                  │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 20:02 UTC │                     │
	│ start   │ -p multinode-685862-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-685862-m02 │ jenkins │ v1.37.0 │ 01 Dec 25 20:02 UTC │                     │
	│ start   │ -p multinode-685862-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-685862-m03 │ jenkins │ v1.37.0 │ 01 Dec 25 20:02 UTC │ 01 Dec 25 20:03 UTC │
	│ node    │ add -p multinode-685862                                                                                                                                   │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 20:03 UTC │                     │
	│ delete  │ -p multinode-685862-m03                                                                                                                                   │ multinode-685862-m03 │ jenkins │ v1.37.0 │ 01 Dec 25 20:03 UTC │ 01 Dec 25 20:03 UTC │
	│ delete  │ -p multinode-685862                                                                                                                                       │ multinode-685862     │ jenkins │ v1.37.0 │ 01 Dec 25 20:03 UTC │ 01 Dec 25 20:03 UTC │
	│ start   │ -p test-preload-245765 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-245765  │ jenkins │ v1.37.0 │ 01 Dec 25 20:03 UTC │ 01 Dec 25 20:04 UTC │
	│ image   │ test-preload-245765 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-245765  │ jenkins │ v1.37.0 │ 01 Dec 25 20:04 UTC │ 01 Dec 25 20:04 UTC │
	│ stop    │ -p test-preload-245765                                                                                                                                    │ test-preload-245765  │ jenkins │ v1.37.0 │ 01 Dec 25 20:04 UTC │ 01 Dec 25 20:04 UTC │
	│ start   │ -p test-preload-245765 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-245765  │ jenkins │ v1.37.0 │ 01 Dec 25 20:04 UTC │ 01 Dec 25 20:05 UTC │
	│ image   │ test-preload-245765 image list                                                                                                                            │ test-preload-245765  │ jenkins │ v1.37.0 │ 01 Dec 25 20:05 UTC │ 01 Dec 25 20:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:04:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:04:55.857351   43679 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:04:55.857579   43679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:04:55.857588   43679 out.go:374] Setting ErrFile to fd 2...
	I1201 20:04:55.857592   43679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:04:55.857814   43679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:04:55.858235   43679 out.go:368] Setting JSON to false
	I1201 20:04:55.859085   43679 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6439,"bootTime":1764613057,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:04:55.859195   43679 start.go:143] virtualization: kvm guest
	I1201 20:04:55.861457   43679 out.go:179] * [test-preload-245765] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:04:55.863166   43679 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:04:55.863165   43679 notify.go:221] Checking for updates...
	I1201 20:04:55.866385   43679 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:04:55.867843   43679 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:04:55.869313   43679 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:04:55.871015   43679 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:04:55.872425   43679 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:04:55.874210   43679 config.go:182] Loaded profile config "test-preload-245765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:04:55.874674   43679 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:04:55.909377   43679 out.go:179] * Using the kvm2 driver based on existing profile
	I1201 20:04:55.910648   43679 start.go:309] selected driver: kvm2
	I1201 20:04:55.910666   43679 start.go:927] validating driver "kvm2" against &{Name:test-preload-245765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-245765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:04:55.910840   43679 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:04:55.911932   43679 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:04:55.911963   43679 cni.go:84] Creating CNI manager for ""
	I1201 20:04:55.912035   43679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:04:55.912094   43679 start.go:353] cluster config:
	{Name:test-preload-245765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-245765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:04:55.912187   43679 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:04:55.913855   43679 out.go:179] * Starting "test-preload-245765" primary control-plane node in "test-preload-245765" cluster
	I1201 20:04:55.915183   43679 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:04:55.915223   43679 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:04:55.915237   43679 cache.go:65] Caching tarball of preloaded images
	I1201 20:04:55.915325   43679 preload.go:238] Found /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:04:55.915341   43679 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:04:55.915437   43679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/config.json ...
	I1201 20:04:55.915638   43679 start.go:360] acquireMachinesLock for test-preload-245765: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 20:04:55.915683   43679 start.go:364] duration metric: took 27.379µs to acquireMachinesLock for "test-preload-245765"
	I1201 20:04:55.915701   43679 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:04:55.915709   43679 fix.go:54] fixHost starting: 
	I1201 20:04:55.917528   43679 fix.go:112] recreateIfNeeded on test-preload-245765: state=Stopped err=<nil>
	W1201 20:04:55.917547   43679 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:04:55.919189   43679 out.go:252] * Restarting existing kvm2 VM for "test-preload-245765" ...
	I1201 20:04:55.919220   43679 main.go:143] libmachine: starting domain...
	I1201 20:04:55.919228   43679 main.go:143] libmachine: ensuring networks are active...
	I1201 20:04:55.919974   43679 main.go:143] libmachine: Ensuring network default is active
	I1201 20:04:55.920267   43679 main.go:143] libmachine: Ensuring network mk-test-preload-245765 is active
	I1201 20:04:55.920638   43679 main.go:143] libmachine: getting domain XML...
	I1201 20:04:55.921845   43679 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-245765</name>
	  <uuid>df84803b-5ed5-482c-86c5-4cec4f0e7b13</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/test-preload-245765.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b8:76:51'/>
	      <source network='mk-test-preload-245765'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:3d:27:9d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1201 20:04:57.217193   43679 main.go:143] libmachine: waiting for domain to start...
	I1201 20:04:57.219196   43679 main.go:143] libmachine: domain is now running
	I1201 20:04:57.219238   43679 main.go:143] libmachine: waiting for IP...
	I1201 20:04:57.220260   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:04:57.221438   43679 main.go:143] libmachine: domain test-preload-245765 has current primary IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:04:57.221458   43679 main.go:143] libmachine: found domain IP: 192.168.39.215
	I1201 20:04:57.221467   43679 main.go:143] libmachine: reserving static IP address...
	I1201 20:04:57.222056   43679 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-245765", mac: "52:54:00:b8:76:51", ip: "192.168.39.215"} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:03:30 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:04:57.222093   43679 main.go:143] libmachine: skip adding static IP to network mk-test-preload-245765 - found existing host DHCP lease matching {name: "test-preload-245765", mac: "52:54:00:b8:76:51", ip: "192.168.39.215"}
	I1201 20:04:57.222116   43679 main.go:143] libmachine: reserved static IP address 192.168.39.215 for domain test-preload-245765
	I1201 20:04:57.222135   43679 main.go:143] libmachine: waiting for SSH...
	I1201 20:04:57.222148   43679 main.go:143] libmachine: Getting to WaitForSSH function...
	I1201 20:04:57.225093   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:04:57.225781   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:03:30 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:04:57.225846   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:04:57.226059   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:04:57.226353   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:04:57.226371   43679 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1201 20:05:00.324133   43679 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.215:22: connect: no route to host
	I1201 20:05:06.404339   43679 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.215:22: connect: no route to host
	I1201 20:05:09.522911   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:05:09.526844   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.527334   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.527364   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.527591   43679 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/config.json ...
	I1201 20:05:09.527849   43679 machine.go:94] provisionDockerMachine start ...
	I1201 20:05:09.530161   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.530486   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.530508   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.530656   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:05:09.530904   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:05:09.530918   43679 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:05:09.647146   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1201 20:05:09.647173   43679 buildroot.go:166] provisioning hostname "test-preload-245765"
	I1201 20:05:09.650325   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.650777   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.650802   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.650968   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:05:09.651168   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:05:09.651179   43679 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-245765 && echo "test-preload-245765" | sudo tee /etc/hostname
	I1201 20:05:09.784178   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-245765
	
	I1201 20:05:09.786925   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.787289   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.787314   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.787475   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:05:09.787807   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:05:09.787866   43679 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-245765' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-245765/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-245765' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:05:09.912618   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:05:09.912643   43679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:05:09.912681   43679 buildroot.go:174] setting up certificates
	I1201 20:05:09.912691   43679 provision.go:84] configureAuth start
	I1201 20:05:09.915806   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.916293   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.916323   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.918671   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.919204   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:09.919229   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:09.919435   43679 provision.go:143] copyHostCerts
	I1201 20:05:09.919485   43679 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:05:09.919493   43679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:05:09.919572   43679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:05:09.919704   43679 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:05:09.919715   43679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:05:09.919755   43679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:05:09.919896   43679 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:05:09.919908   43679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:05:09.919940   43679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:05:09.920089   43679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.test-preload-245765 san=[127.0.0.1 192.168.39.215 localhost minikube test-preload-245765]
	I1201 20:05:10.070157   43679 provision.go:177] copyRemoteCerts
	I1201 20:05:10.070210   43679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:05:10.072796   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.073176   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.073213   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.073367   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:10.162081   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:05:10.193113   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1201 20:05:10.225723   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:05:10.257540   43679 provision.go:87] duration metric: took 344.827843ms to configureAuth
	I1201 20:05:10.257568   43679 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:05:10.257728   43679 config.go:182] Loaded profile config "test-preload-245765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:05:10.260944   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.261348   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.261366   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.261531   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:05:10.261726   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:05:10.261742   43679 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:05:10.519273   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:05:10.519303   43679 machine.go:97] duration metric: took 991.437101ms to provisionDockerMachine
	I1201 20:05:10.519319   43679 start.go:293] postStartSetup for "test-preload-245765" (driver="kvm2")
	I1201 20:05:10.519331   43679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:05:10.519404   43679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:05:10.522873   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.523382   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.523409   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.523742   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:10.614346   43679 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:05:10.619671   43679 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:05:10.619697   43679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:05:10.619773   43679 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:05:10.619880   43679 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:05:10.619965   43679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:05:10.632136   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:05:10.664413   43679 start.go:296] duration metric: took 145.0787ms for postStartSetup
	I1201 20:05:10.664479   43679 fix.go:56] duration metric: took 14.748742705s for fixHost
	I1201 20:05:10.667323   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.667691   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.667709   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.667926   43679 main.go:143] libmachine: Using SSH client type: native
	I1201 20:05:10.668116   43679 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I1201 20:05:10.668127   43679 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:05:10.786422   43679 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764619510.747704109
	
	I1201 20:05:10.786446   43679 fix.go:216] guest clock: 1764619510.747704109
	I1201 20:05:10.786456   43679 fix.go:229] Guest: 2025-12-01 20:05:10.747704109 +0000 UTC Remote: 2025-12-01 20:05:10.664486993 +0000 UTC m=+14.854375272 (delta=83.217116ms)
	I1201 20:05:10.786476   43679 fix.go:200] guest clock delta is within tolerance: 83.217116ms
	I1201 20:05:10.786482   43679 start.go:83] releasing machines lock for "test-preload-245765", held for 14.870788449s
	I1201 20:05:10.789219   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.789690   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.789721   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.790266   43679 ssh_runner.go:195] Run: cat /version.json
	I1201 20:05:10.790326   43679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:05:10.793181   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.793505   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.793738   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.793787   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.793983   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:10.794175   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:10.794219   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:10.794424   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:10.916319   43679 ssh_runner.go:195] Run: systemctl --version
	I1201 20:05:10.922997   43679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:05:11.069903   43679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:05:11.076847   43679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:05:11.076952   43679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:05:11.099116   43679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 20:05:11.099141   43679 start.go:496] detecting cgroup driver to use...
	I1201 20:05:11.099216   43679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:05:11.119654   43679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:05:11.138104   43679 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:05:11.138165   43679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:05:11.156947   43679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:05:11.175556   43679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:05:11.314966   43679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:05:11.527610   43679 docker.go:234] disabling docker service ...
	I1201 20:05:11.527668   43679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:05:11.543655   43679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:05:11.558589   43679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:05:11.719356   43679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:05:11.858850   43679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:05:11.875211   43679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:05:11.898920   43679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:05:11.898998   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.911529   43679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:05:11.911597   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.924658   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.937895   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.951261   43679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:05:11.964769   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.977866   43679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:11.999480   43679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:05:12.012747   43679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:05:12.024312   43679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 20:05:12.024384   43679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 20:05:12.044182   43679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:05:12.056591   43679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:05:12.195117   43679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:05:12.301383   43679 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:05:12.301471   43679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:05:12.306955   43679 start.go:564] Will wait 60s for crictl version
	I1201 20:05:12.307022   43679 ssh_runner.go:195] Run: which crictl
	I1201 20:05:12.311183   43679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:05:12.347973   43679 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:05:12.348068   43679 ssh_runner.go:195] Run: crio --version
	I1201 20:05:12.377736   43679 ssh_runner.go:195] Run: crio --version
	I1201 20:05:12.409680   43679 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1201 20:05:12.413260   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:12.413599   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:12.413626   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:12.413795   43679 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1201 20:05:12.418357   43679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:05:12.434102   43679 kubeadm.go:884] updating cluster {Name:test-preload-245765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-245765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:05:12.434213   43679 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:05:12.434266   43679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:05:12.466821   43679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1201 20:05:12.466902   43679 ssh_runner.go:195] Run: which lz4
	I1201 20:05:12.471263   43679 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1201 20:05:12.476517   43679 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1201 20:05:12.476554   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1201 20:05:13.734946   43679 crio.go:462] duration metric: took 1.263712905s to copy over tarball
	I1201 20:05:13.735028   43679 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1201 20:05:15.206123   43679 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.471067023s)
	I1201 20:05:15.206158   43679 crio.go:469] duration metric: took 1.471176343s to extract the tarball
	I1201 20:05:15.206166   43679 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1201 20:05:15.243780   43679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:05:15.291811   43679 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:05:15.291847   43679 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:05:15.291855   43679 kubeadm.go:935] updating node { 192.168.39.215 8443 v1.34.2 crio true true} ...
	I1201 20:05:15.291943   43679 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-245765 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-245765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:05:15.292007   43679 ssh_runner.go:195] Run: crio config
	I1201 20:05:15.342183   43679 cni.go:84] Creating CNI manager for ""
	I1201 20:05:15.342208   43679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:05:15.342224   43679 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:05:15.342249   43679 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-245765 NodeName:test-preload-245765 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:05:15.342388   43679 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-245765"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.215"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:05:15.342473   43679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:05:15.354909   43679 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:05:15.354995   43679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:05:15.367619   43679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1201 20:05:15.388607   43679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:05:15.410167   43679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1201 20:05:15.431451   43679 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I1201 20:05:15.435741   43679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:05:15.451373   43679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:05:15.592433   43679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:05:15.623073   43679 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765 for IP: 192.168.39.215
	I1201 20:05:15.623102   43679 certs.go:195] generating shared ca certs ...
	I1201 20:05:15.623123   43679 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:05:15.623325   43679 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:05:15.623413   43679 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:05:15.623430   43679 certs.go:257] generating profile certs ...
	I1201 20:05:15.623547   43679 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.key
	I1201 20:05:15.623644   43679 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/apiserver.key.ea97d094
	I1201 20:05:15.623712   43679 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/proxy-client.key
	I1201 20:05:15.623895   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:05:15.623947   43679 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:05:15.623962   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:05:15.624001   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:05:15.624037   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:05:15.624078   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:05:15.624146   43679 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:05:15.624978   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:05:15.666653   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:05:15.702122   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:05:15.732623   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:05:15.763980   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1201 20:05:15.794253   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:05:15.825141   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:05:15.856263   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1201 20:05:15.887160   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:05:15.917642   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:05:15.947798   43679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:05:15.978148   43679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:05:15.999466   43679 ssh_runner.go:195] Run: openssl version
	I1201 20:05:16.006618   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:05:16.020379   43679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:05:16.025723   43679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:05:16.025796   43679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:05:16.032844   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:05:16.046332   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:05:16.059924   43679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:05:16.065674   43679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:05:16.065734   43679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:05:16.072937   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:05:16.086049   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:05:16.099771   43679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:05:16.104951   43679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:05:16.105012   43679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:05:16.112354   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:05:16.125758   43679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:05:16.131584   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:05:16.139167   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:05:16.146955   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:05:16.154468   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:05:16.162100   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:05:16.169687   43679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:05:16.177276   43679 kubeadm.go:401] StartCluster: {Name:test-preload-245765 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-245765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:05:16.177365   43679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:05:16.177409   43679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:05:16.209349   43679 cri.go:89] found id: ""
	I1201 20:05:16.209424   43679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:05:16.221759   43679 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1201 20:05:16.221785   43679 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1201 20:05:16.221847   43679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1201 20:05:16.233353   43679 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1201 20:05:16.233760   43679 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-245765" does not appear in /home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:05:16.233904   43679 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-12903/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-245765" cluster setting kubeconfig missing "test-preload-245765" context setting]
	I1201 20:05:16.234210   43679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/kubeconfig: {Name:mkf67691ba90fcc0b34f838eaae92a26f4e31096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:05:16.234673   43679 kapi.go:59] client config for test-preload-245765: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:05:16.235114   43679 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1201 20:05:16.235136   43679 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1201 20:05:16.235144   43679 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1201 20:05:16.235150   43679 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1201 20:05:16.235155   43679 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1201 20:05:16.235481   43679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1201 20:05:16.247708   43679 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.215
	I1201 20:05:16.247742   43679 kubeadm.go:1161] stopping kube-system containers ...
	I1201 20:05:16.247756   43679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1201 20:05:16.247805   43679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:05:16.282773   43679 cri.go:89] found id: ""
	I1201 20:05:16.282870   43679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1201 20:05:16.307548   43679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:05:16.319636   43679 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:05:16.319663   43679 kubeadm.go:158] found existing configuration files:
	
	I1201 20:05:16.319717   43679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1201 20:05:16.331076   43679 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:05:16.331150   43679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:05:16.342847   43679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1201 20:05:16.353648   43679 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:05:16.353705   43679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:05:16.365991   43679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1201 20:05:16.376968   43679 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:05:16.377026   43679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:05:16.389354   43679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1201 20:05:16.400195   43679 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:05:16.400259   43679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:05:16.412865   43679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:05:16.424731   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:16.481715   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:17.710503   43679 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.228748749s)
	I1201 20:05:17.710583   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:17.951621   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:18.022391   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:18.094937   43679 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:05:18.095033   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:18.595088   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:19.095626   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:19.596108   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:20.096089   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:20.595373   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:20.622274   43679 api_server.go:72] duration metric: took 2.527353427s to wait for apiserver process to appear ...
	I1201 20:05:20.622307   43679 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:05:20.622327   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:23.172847   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:05:23.172880   43679 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:05:23.172895   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:23.219359   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1201 20:05:23.219389   43679 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1201 20:05:23.623027   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:23.628592   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:05:23.628620   43679 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:05:24.123352   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:24.130819   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1201 20:05:24.130864   43679 api_server.go:103] status: https://192.168.39.215:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1201 20:05:24.623170   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:24.632096   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I1201 20:05:24.640461   43679 api_server.go:141] control plane version: v1.34.2
	I1201 20:05:24.640505   43679 api_server.go:131] duration metric: took 4.018189658s to wait for apiserver health ...
	I1201 20:05:24.640525   43679 cni.go:84] Creating CNI manager for ""
	I1201 20:05:24.640535   43679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:05:24.642403   43679 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1201 20:05:24.643890   43679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1201 20:05:24.679241   43679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1201 20:05:24.713088   43679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:05:24.718013   43679 system_pods.go:59] 7 kube-system pods found
	I1201 20:05:24.718076   43679 system_pods.go:61] "coredns-66bc5c9577-4k2vv" [2981768c-9197-48dd-a3e9-4e07027c8910] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:05:24.718088   43679 system_pods.go:61] "etcd-test-preload-245765" [8bdb29c4-9640-4b86-a6fa-4ad122c1261f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:05:24.718099   43679 system_pods.go:61] "kube-apiserver-test-preload-245765" [a99ad8f1-f152-4491-b84b-1c4dfebbd0f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:05:24.718110   43679 system_pods.go:61] "kube-controller-manager-test-preload-245765" [2b3d1672-1883-4e52-8830-8ac0bba5f553] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:05:24.718124   43679 system_pods.go:61] "kube-proxy-rbqdv" [5db6f825-edf2-4b48-b7e9-da262f703712] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1201 20:05:24.718133   43679 system_pods.go:61] "kube-scheduler-test-preload-245765" [2b4ed0d1-b6c9-490c-854c-234f55dff007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:05:24.718143   43679 system_pods.go:61] "storage-provisioner" [55bec97f-a2a0-48f1-8328-6ecb349209b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1201 20:05:24.718154   43679 system_pods.go:74] duration metric: took 5.047459ms to wait for pod list to return data ...
	I1201 20:05:24.718165   43679 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:05:24.723638   43679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1201 20:05:24.723677   43679 node_conditions.go:123] node cpu capacity is 2
	I1201 20:05:24.723694   43679 node_conditions.go:105] duration metric: took 5.518764ms to run NodePressure ...
	I1201 20:05:24.723770   43679 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1201 20:05:25.010732   43679 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1201 20:05:25.015680   43679 kubeadm.go:744] kubelet initialised
	I1201 20:05:25.015720   43679 kubeadm.go:745] duration metric: took 4.959239ms waiting for restarted kubelet to initialise ...
	I1201 20:05:25.015743   43679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1201 20:05:25.033991   43679 ops.go:34] apiserver oom_adj: -16
	I1201 20:05:25.034013   43679 kubeadm.go:602] duration metric: took 8.812220708s to restartPrimaryControlPlane
	I1201 20:05:25.034021   43679 kubeadm.go:403] duration metric: took 8.856754137s to StartCluster
	I1201 20:05:25.034036   43679 settings.go:142] acquiring lock: {Name:mk63d3c798c3f817a653e3e39f757c57080fff76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:05:25.034104   43679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:05:25.034605   43679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/kubeconfig: {Name:mkf67691ba90fcc0b34f838eaae92a26f4e31096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:05:25.034804   43679 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:05:25.034915   43679 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1201 20:05:25.035011   43679 addons.go:70] Setting storage-provisioner=true in profile "test-preload-245765"
	I1201 20:05:25.035015   43679 config.go:182] Loaded profile config "test-preload-245765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:05:25.035029   43679 addons.go:239] Setting addon storage-provisioner=true in "test-preload-245765"
	W1201 20:05:25.035040   43679 addons.go:248] addon storage-provisioner should already be in state true
	I1201 20:05:25.035043   43679 addons.go:70] Setting default-storageclass=true in profile "test-preload-245765"
	I1201 20:05:25.035092   43679 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-245765"
	I1201 20:05:25.035066   43679 host.go:66] Checking if "test-preload-245765" exists ...
	I1201 20:05:25.036334   43679 out.go:179] * Verifying Kubernetes components...
	I1201 20:05:25.037518   43679 kapi.go:59] client config for test-preload-245765: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:05:25.037630   43679 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1201 20:05:25.037664   43679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:05:25.037855   43679 addons.go:239] Setting addon default-storageclass=true in "test-preload-245765"
	W1201 20:05:25.037874   43679 addons.go:248] addon default-storageclass should already be in state true
	I1201 20:05:25.037897   43679 host.go:66] Checking if "test-preload-245765" exists ...
	I1201 20:05:25.038959   43679 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:05:25.038975   43679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1201 20:05:25.039566   43679 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1201 20:05:25.039578   43679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1201 20:05:25.041649   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:25.042096   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:25.042120   43679 main.go:143] libmachine: domain test-preload-245765 has defined MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:25.042131   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:25.042352   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:25.042606   43679 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b8:76:51", ip: ""} in network mk-test-preload-245765: {Iface:virbr1 ExpiryTime:2025-12-01 21:05:07 +0000 UTC Type:0 Mac:52:54:00:b8:76:51 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:test-preload-245765 Clientid:01:52:54:00:b8:76:51}
	I1201 20:05:25.042629   43679 main.go:143] libmachine: domain test-preload-245765 has defined IP address 192.168.39.215 and MAC address 52:54:00:b8:76:51 in network mk-test-preload-245765
	I1201 20:05:25.042777   43679 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/test-preload-245765/id_rsa Username:docker}
	I1201 20:05:25.263470   43679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:05:25.287352   43679 node_ready.go:35] waiting up to 6m0s for node "test-preload-245765" to be "Ready" ...
	I1201 20:05:25.291073   43679 node_ready.go:49] node "test-preload-245765" is "Ready"
	I1201 20:05:25.291110   43679 node_ready.go:38] duration metric: took 3.702058ms for node "test-preload-245765" to be "Ready" ...
	I1201 20:05:25.291130   43679 api_server.go:52] waiting for apiserver process to appear ...
	I1201 20:05:25.291194   43679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 20:05:25.323710   43679 api_server.go:72] duration metric: took 288.858516ms to wait for apiserver process to appear ...
	I1201 20:05:25.323745   43679 api_server.go:88] waiting for apiserver healthz status ...
	I1201 20:05:25.323770   43679 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I1201 20:05:25.331746   43679 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I1201 20:05:25.333624   43679 api_server.go:141] control plane version: v1.34.2
	I1201 20:05:25.333645   43679 api_server.go:131] duration metric: took 9.893004ms to wait for apiserver health ...
	I1201 20:05:25.333654   43679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1201 20:05:25.337565   43679 system_pods.go:59] 7 kube-system pods found
	I1201 20:05:25.337591   43679 system_pods.go:61] "coredns-66bc5c9577-4k2vv" [2981768c-9197-48dd-a3e9-4e07027c8910] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:05:25.337597   43679 system_pods.go:61] "etcd-test-preload-245765" [8bdb29c4-9640-4b86-a6fa-4ad122c1261f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:05:25.337604   43679 system_pods.go:61] "kube-apiserver-test-preload-245765" [a99ad8f1-f152-4491-b84b-1c4dfebbd0f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:05:25.337609   43679 system_pods.go:61] "kube-controller-manager-test-preload-245765" [2b3d1672-1883-4e52-8830-8ac0bba5f553] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:05:25.337615   43679 system_pods.go:61] "kube-proxy-rbqdv" [5db6f825-edf2-4b48-b7e9-da262f703712] Running
	I1201 20:05:25.337623   43679 system_pods.go:61] "kube-scheduler-test-preload-245765" [2b4ed0d1-b6c9-490c-854c-234f55dff007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:05:25.337628   43679 system_pods.go:61] "storage-provisioner" [55bec97f-a2a0-48f1-8328-6ecb349209b6] Running
	I1201 20:05:25.337636   43679 system_pods.go:74] duration metric: took 3.976458ms to wait for pod list to return data ...
	I1201 20:05:25.337645   43679 default_sa.go:34] waiting for default service account to be created ...
	I1201 20:05:25.349919   43679 default_sa.go:45] found service account: "default"
	I1201 20:05:25.349951   43679 default_sa.go:55] duration metric: took 12.298466ms for default service account to be created ...
	I1201 20:05:25.349963   43679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1201 20:05:25.353657   43679 system_pods.go:86] 7 kube-system pods found
	I1201 20:05:25.353684   43679 system_pods.go:89] "coredns-66bc5c9577-4k2vv" [2981768c-9197-48dd-a3e9-4e07027c8910] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1201 20:05:25.353692   43679 system_pods.go:89] "etcd-test-preload-245765" [8bdb29c4-9640-4b86-a6fa-4ad122c1261f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1201 20:05:25.353701   43679 system_pods.go:89] "kube-apiserver-test-preload-245765" [a99ad8f1-f152-4491-b84b-1c4dfebbd0f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1201 20:05:25.353708   43679 system_pods.go:89] "kube-controller-manager-test-preload-245765" [2b3d1672-1883-4e52-8830-8ac0bba5f553] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1201 20:05:25.353719   43679 system_pods.go:89] "kube-proxy-rbqdv" [5db6f825-edf2-4b48-b7e9-da262f703712] Running
	I1201 20:05:25.353724   43679 system_pods.go:89] "kube-scheduler-test-preload-245765" [2b4ed0d1-b6c9-490c-854c-234f55dff007] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1201 20:05:25.353728   43679 system_pods.go:89] "storage-provisioner" [55bec97f-a2a0-48f1-8328-6ecb349209b6] Running
	I1201 20:05:25.353735   43679 system_pods.go:126] duration metric: took 3.765888ms to wait for k8s-apps to be running ...
	I1201 20:05:25.353745   43679 system_svc.go:44] waiting for kubelet service to be running ....
	I1201 20:05:25.353787   43679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 20:05:25.373335   43679 system_svc.go:56] duration metric: took 19.58192ms WaitForService to wait for kubelet
	I1201 20:05:25.373373   43679 kubeadm.go:587] duration metric: took 338.525872ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:05:25.373396   43679 node_conditions.go:102] verifying NodePressure condition ...
	I1201 20:05:25.379687   43679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1201 20:05:25.379710   43679 node_conditions.go:123] node cpu capacity is 2
	I1201 20:05:25.379720   43679 node_conditions.go:105] duration metric: took 6.318658ms to run NodePressure ...
	I1201 20:05:25.379733   43679 start.go:242] waiting for startup goroutines ...
	I1201 20:05:25.449589   43679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1201 20:05:25.454507   43679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1201 20:05:26.118341   43679 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1201 20:05:26.119886   43679 addons.go:530] duration metric: took 1.084978625s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1201 20:05:26.119926   43679 start.go:247] waiting for cluster config update ...
	I1201 20:05:26.119942   43679 start.go:256] writing updated cluster config ...
	I1201 20:05:26.120222   43679 ssh_runner.go:195] Run: rm -f paused
	I1201 20:05:26.125551   43679 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:05:26.126004   43679 kapi.go:59] client config for test-preload-245765: &rest.Config{Host:"https://192.168.39.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/profiles/test-preload-245765/client.key", CAFile:"/home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1201 20:05:26.129711   43679 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4k2vv" in "kube-system" namespace to be "Ready" or be gone ...
	W1201 20:05:28.138211   43679 pod_ready.go:104] pod "coredns-66bc5c9577-4k2vv" is not "Ready", error: <nil>
	W1201 20:05:30.635726   43679 pod_ready.go:104] pod "coredns-66bc5c9577-4k2vv" is not "Ready", error: <nil>
	W1201 20:05:32.635861   43679 pod_ready.go:104] pod "coredns-66bc5c9577-4k2vv" is not "Ready", error: <nil>
	W1201 20:05:34.637036   43679 pod_ready.go:104] pod "coredns-66bc5c9577-4k2vv" is not "Ready", error: <nil>
	I1201 20:05:36.135332   43679 pod_ready.go:94] pod "coredns-66bc5c9577-4k2vv" is "Ready"
	I1201 20:05:36.135371   43679 pod_ready.go:86] duration metric: took 10.005637326s for pod "coredns-66bc5c9577-4k2vv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.139165   43679 pod_ready.go:83] waiting for pod "etcd-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.145030   43679 pod_ready.go:94] pod "etcd-test-preload-245765" is "Ready"
	I1201 20:05:36.145055   43679 pod_ready.go:86] duration metric: took 5.862792ms for pod "etcd-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.147019   43679 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.653103   43679 pod_ready.go:94] pod "kube-apiserver-test-preload-245765" is "Ready"
	I1201 20:05:36.653128   43679 pod_ready.go:86] duration metric: took 506.091106ms for pod "kube-apiserver-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.655301   43679 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.733400   43679 pod_ready.go:94] pod "kube-controller-manager-test-preload-245765" is "Ready"
	I1201 20:05:36.733435   43679 pod_ready.go:86] duration metric: took 78.110764ms for pod "kube-controller-manager-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:36.933774   43679 pod_ready.go:83] waiting for pod "kube-proxy-rbqdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:37.333582   43679 pod_ready.go:94] pod "kube-proxy-rbqdv" is "Ready"
	I1201 20:05:37.333609   43679 pod_ready.go:86] duration metric: took 399.803076ms for pod "kube-proxy-rbqdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:37.534455   43679 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:37.933616   43679 pod_ready.go:94] pod "kube-scheduler-test-preload-245765" is "Ready"
	I1201 20:05:37.933640   43679 pod_ready.go:86] duration metric: took 399.154456ms for pod "kube-scheduler-test-preload-245765" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:05:37.933652   43679 pod_ready.go:40] duration metric: took 11.808063261s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:05:37.977242   43679 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:05:37.979891   43679 out.go:179] * Done! kubectl is now configured to use "test-preload-245765" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.775129102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764619538775101068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d3cb5e9-7982-4b77-8f59-34759c0bc3c0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.776319995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64589dc4-4bc8-4b18-a6a2-f19793ec1128 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.776372265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64589dc4-4bc8-4b18-a6a2-f19793ec1128 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.776605185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaae27910162656bcbfe93d36737444e147d00f4aa803f196ff08c9e6e9589c,PodSandboxId:58ce51882e20bc67b8254377ca5f2216042a46a0f4a54a1498377efac76d1082,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764619527849245518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4k2vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2981768c-9197-48dd-a3e9-4e07027c8910,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417a188c3021fea0e4587f1c6341545e4694d280f3a1d06341dd27655325f2f4,PodSandboxId:56b10e97680cd6dc23b807a57f5b5dc42575f6e1aa230d27119ad476b33ec87b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764619524539436588,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55bec97f-a2a0-48f1-8328-6ecb349209b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55f4306ea91aaf2550fe44ffa0cf8c5ca0b7f177d683b95665c982fe903700,PodSandboxId:86d978cdf9255f2dde68d5f3b30ecf8605e9bd0c0308234b48a84152eedc02d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764619524505103721,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbqdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db6f825-edf2-4b48-b7e9-da262f703712,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d048a7c46c548ce324f6a2120f50fea5a0b425f33ebca378b805fc386b163bac,PodSandboxId:c292ca336f845e078a7d88ef7ac71ad6d9b6dcfe96861fb67b49d63c7b3d5d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764619519929630125,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869d3b91d446f5af04df3c2f5db500f5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fddb9e671103fd967f638abcb260a9b581fad74466c7d4a3284e1952f222961,PodSandboxId:511d1ed6448f3625c0490655cca086496d624df1617e1cc999753916ef73abc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764619519922764218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c88a23a1dd66d0653fe4d8fb9f4d035,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77e79420b5f4de4bc4a08b6e9173a087e552222830fd467dee131893215961,PodSandboxId:5316ef98aed8aa2d26ac3c6790f5c9dc16cebb1f9e14320e3938b7621f73597a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764619519884208271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ecb3808d5b2cb7e5ab17d2662f06031,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6241ba2173f70a534148a1f4c14b0da9b94566bdbd3983e0a20450bd1822457,PodSandboxId:98c731991d264e256d912b9b2a2b94080664876f7d585af70ba20c88f0813ba5,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764619519871667041,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65df565ad72dbc7fecae5de430224ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64589dc4-4bc8-4b18-a6a2-f19793ec1128 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.812217177Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd643d91-eca3-4083-84f1-98c5bb5572b4 name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.812338329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd643d91-eca3-4083-84f1-98c5bb5572b4 name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.813840285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdfd4620-b1b0-4ca3-90bf-26b305cd0db0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.814272847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764619538814247777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdfd4620-b1b0-4ca3-90bf-26b305cd0db0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.815269892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5720e1f8-86e3-4c72-a8c2-c2fc514eafc6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.815379969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5720e1f8-86e3-4c72-a8c2-c2fc514eafc6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.816188741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaae27910162656bcbfe93d36737444e147d00f4aa803f196ff08c9e6e9589c,PodSandboxId:58ce51882e20bc67b8254377ca5f2216042a46a0f4a54a1498377efac76d1082,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764619527849245518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4k2vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2981768c-9197-48dd-a3e9-4e07027c8910,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417a188c3021fea0e4587f1c6341545e4694d280f3a1d06341dd27655325f2f4,PodSandboxId:56b10e97680cd6dc23b807a57f5b5dc42575f6e1aa230d27119ad476b33ec87b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764619524539436588,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55bec97f-a2a0-48f1-8328-6ecb349209b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55f4306ea91aaf2550fe44ffa0cf8c5ca0b7f177d683b95665c982fe903700,PodSandboxId:86d978cdf9255f2dde68d5f3b30ecf8605e9bd0c0308234b48a84152eedc02d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764619524505103721,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbqdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db6f825-edf2-4b48-b7e9-da262f703712,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d048a7c46c548ce324f6a2120f50fea5a0b425f33ebca378b805fc386b163bac,PodSandboxId:c292ca336f845e078a7d88ef7ac71ad6d9b6dcfe96861fb67b49d63c7b3d5d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764619519929630125,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869d3b91d446f5af04df3c2f5db500f5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fddb9e671103fd967f638abcb260a9b581fad74466c7d4a3284e1952f222961,PodSandboxId:511d1ed6448f3625c0490655cca086496d624df1617e1cc999753916ef73abc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764619519922764218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c88a23a1dd66d0653fe4d8fb9f4d035,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77e79420b5f4de4bc4a08b6e9173a087e552222830fd467dee131893215961,PodSandboxId:5316ef98aed8aa2d26ac3c6790f5c9dc16cebb1f9e14320e3938b7621f73597a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764619519884208271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ecb3808d5b2cb7e5ab17d2662f06031,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6241ba2173f70a534148a1f4c14b0da9b94566bdbd3983e0a20450bd1822457,PodSandboxId:98c731991d264e256d912b9b2a2b94080664876f7d585af70ba20c88f0813ba5,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764619519871667041,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65df565ad72dbc7fecae5de430224ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5720e1f8-86e3-4c72-a8c2-c2fc514eafc6 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.851049691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a88f4769-9359-4327-a15b-3045f99fc031 name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.851173802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a88f4769-9359-4327-a15b-3045f99fc031 name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.852827625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f2e4e0b-1ece-4fce-8664-3b0971d2ed69 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.853242341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764619538853218657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f2e4e0b-1ece-4fce-8664-3b0971d2ed69 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.854287221Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d54c5baf-bdc0-4cff-b0e7-ff2badcf4635 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.854360133Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d54c5baf-bdc0-4cff-b0e7-ff2badcf4635 name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.854614290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaae27910162656bcbfe93d36737444e147d00f4aa803f196ff08c9e6e9589c,PodSandboxId:58ce51882e20bc67b8254377ca5f2216042a46a0f4a54a1498377efac76d1082,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764619527849245518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4k2vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2981768c-9197-48dd-a3e9-4e07027c8910,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417a188c3021fea0e4587f1c6341545e4694d280f3a1d06341dd27655325f2f4,PodSandboxId:56b10e97680cd6dc23b807a57f5b5dc42575f6e1aa230d27119ad476b33ec87b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764619524539436588,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55bec97f-a2a0-48f1-8328-6ecb349209b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55f4306ea91aaf2550fe44ffa0cf8c5ca0b7f177d683b95665c982fe903700,PodSandboxId:86d978cdf9255f2dde68d5f3b30ecf8605e9bd0c0308234b48a84152eedc02d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764619524505103721,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbqdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db6f825-edf2-4b48-b7e9-da262f703712,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d048a7c46c548ce324f6a2120f50fea5a0b425f33ebca378b805fc386b163bac,PodSandboxId:c292ca336f845e078a7d88ef7ac71ad6d9b6dcfe96861fb67b49d63c7b3d5d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764619519929630125,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869d3b91d446f5af04df3c2f5db500f5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fddb9e671103fd967f638abcb260a9b581fad74466c7d4a3284e1952f222961,PodSandboxId:511d1ed6448f3625c0490655cca086496d624df1617e1cc999753916ef73abc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764619519922764218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c88a23a1dd66d0653fe4d8fb9f4d035,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77e79420b5f4de4bc4a08b6e9173a087e552222830fd467dee131893215961,PodSandboxId:5316ef98aed8aa2d26ac3c6790f5c9dc16cebb1f9e14320e3938b7621f73597a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764619519884208271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ecb3808d5b2cb7e5ab17d2662f06031,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6241ba2173f70a534148a1f4c14b0da9b94566bdbd3983e0a20450bd1822457,PodSandboxId:98c731991d264e256d912b9b2a2b94080664876f7d585af70ba20c88f0813ba5,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764619519871667041,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65df565ad72dbc7fecae5de430224ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d54c5baf-bdc0-4cff-b0e7-ff2badcf4635 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.883817507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8dbc843b-4a90-4e42-855f-f0116428edeb name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.883910349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8dbc843b-4a90-4e42-855f-f0116428edeb name=/runtime.v1.RuntimeService/Version
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.885028677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=685a1bcc-7099-4bfd-b372-6c1d1f142f38 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.885438099Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764619538885412720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=685a1bcc-7099-4bfd-b372-6c1d1f142f38 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.886490757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfdda2bf-1f34-4228-a8f7-cdec57b3a67c name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.886624445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfdda2bf-1f34-4228-a8f7-cdec57b3a67c name=/runtime.v1.RuntimeService/ListContainers
	Dec 01 20:05:38 test-preload-245765 crio[828]: time="2025-12-01 20:05:38.886834274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:adaae27910162656bcbfe93d36737444e147d00f4aa803f196ff08c9e6e9589c,PodSandboxId:58ce51882e20bc67b8254377ca5f2216042a46a0f4a54a1498377efac76d1082,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764619527849245518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4k2vv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2981768c-9197-48dd-a3e9-4e07027c8910,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:417a188c3021fea0e4587f1c6341545e4694d280f3a1d06341dd27655325f2f4,PodSandboxId:56b10e97680cd6dc23b807a57f5b5dc42575f6e1aa230d27119ad476b33ec87b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764619524539436588,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55bec97f-a2a0-48f1-8328-6ecb349209b6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a55f4306ea91aaf2550fe44ffa0cf8c5ca0b7f177d683b95665c982fe903700,PodSandboxId:86d978cdf9255f2dde68d5f3b30ecf8605e9bd0c0308234b48a84152eedc02d4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764619524505103721,Labels:map[string]string{
io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rbqdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5db6f825-edf2-4b48-b7e9-da262f703712,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d048a7c46c548ce324f6a2120f50fea5a0b425f33ebca378b805fc386b163bac,PodSandboxId:c292ca336f845e078a7d88ef7ac71ad6d9b6dcfe96861fb67b49d63c7b3d5d7d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764619519929630125,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 869d3b91d446f5af04df3c2f5db500f5,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fddb9e671103fd967f638abcb260a9b581fad74466c7d4a3284e1952f222961,PodSandboxId:511d1ed6448f3625c0490655cca086496d624df1617e1cc999753916ef73abc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e
3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764619519922764218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c88a23a1dd66d0653fe4d8fb9f4d035,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d77e79420b5f4de4bc4a08b6e9173a087e552222830fd467dee131893215961,PodSandboxId:5316ef98aed8aa2d26ac3c6790f5c9dc16cebb1f9e14320e3938b7621f73597a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,A
nnotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764619519884208271,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ecb3808d5b2cb7e5ab17d2662f06031,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6241ba2173f70a534148a1f4c14b0da9b94566bdbd3983e0a20450bd1822457,PodSandboxId:98c731991d264e256d912b9b2a2b94080664876f7d585af70ba20c88f0813ba5,Metadata:&ContainerMetadata{Name:etcd,A
ttempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764619519871667041,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-245765,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65df565ad72dbc7fecae5de430224ce5,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfdda2bf-1f34-4228-a8f7-cdec57b3a67c name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	adaae27910162       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   1                   58ce51882e20b       coredns-66bc5c9577-4k2vv                      kube-system
	417a188c3021f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       2                   56b10e97680cd       storage-provisioner                           kube-system
	8a55f4306ea91       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   14 seconds ago      Running             kube-proxy                1                   86d978cdf9255       kube-proxy-rbqdv                              kube-system
	d048a7c46c548       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            1                   c292ca336f845       kube-scheduler-test-preload-245765            kube-system
	4fddb9e671103       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   1                   511d1ed6448f3       kube-controller-manager-test-preload-245765   kube-system
	1d77e79420b5f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            1                   5316ef98aed8a       kube-apiserver-test-preload-245765            kube-system
	a6241ba2173f7       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   98c731991d264       etcd-test-preload-245765                      kube-system
	
	
	==> coredns [adaae27910162656bcbfe93d36737444e147d00f4aa803f196ff08c9e6e9589c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42471 - 56616 "HINFO IN 8361758423585338319.8259121887543797303. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020409191s
	
	
	==> describe nodes <==
	Name:               test-preload-245765
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-245765
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ab9e66fb642a86710fef1e3147732f1580938c9
	                    minikube.k8s.io/name=test-preload-245765
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_01T20_04_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 01 Dec 2025 20:04:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-245765
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 01 Dec 2025 20:05:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 01 Dec 2025 20:05:24 +0000   Mon, 01 Dec 2025 20:03:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 01 Dec 2025 20:05:24 +0000   Mon, 01 Dec 2025 20:03:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 01 Dec 2025 20:05:24 +0000   Mon, 01 Dec 2025 20:03:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 01 Dec 2025 20:05:24 +0000   Mon, 01 Dec 2025 20:05:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    test-preload-245765
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 df84803b5ed5482c86c54cec4f0e7b13
	  System UUID:                df84803b-5ed5-482c-86c5-4cec4f0e7b13
	  Boot ID:                    bd87602b-f5f1-4ef5-a1f7-0d11e2d0b982
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02.8
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4k2vv                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     90s
	  kube-system                 etcd-test-preload-245765                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-245765             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-245765    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-rbqdv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-test-preload-245765             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 88s                  kube-proxy       
	  Normal   Starting                 14s                  kube-proxy       
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node test-preload-245765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node test-preload-245765 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node test-preload-245765 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 96s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    95s                  kubelet          Node test-preload-245765 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s                  kubelet          Node test-preload-245765 status is now: NodeHasSufficientPID
	  Normal   NodeReady                95s                  kubelet          Node test-preload-245765 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  95s                  kubelet          Node test-preload-245765 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           91s                  node-controller  Node test-preload-245765 event: Registered Node test-preload-245765 in Controller
	  Normal   Starting                 21s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-245765 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-245765 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-245765 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                  kubelet          Node test-preload-245765 has been rebooted, boot id: bd87602b-f5f1-4ef5-a1f7-0d11e2d0b982
	  Normal   RegisteredNode           13s                  node-controller  Node test-preload-245765 event: Registered Node test-preload-245765 in Controller
	
	
	==> dmesg <==
	[Dec 1 20:04] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec 1 20:05] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001625] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.022702] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +3.566544] kauditd_printk_skb: 88 callbacks suppressed
	[  +8.806394] kauditd_printk_skb: 196 callbacks suppressed
	[  +7.978506] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [a6241ba2173f70a534148a1f4c14b0da9b94566bdbd3983e0a20450bd1822457] <==
	{"level":"warn","ts":"2025-12-01T20:05:22.041203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.057645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.073307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.101859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.129736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.144959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.182788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.196664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.210700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.225574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.258661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.287622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.322343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.335612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.354290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.371975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.382052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.395490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.412208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.431231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.439331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.480720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.493228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.508944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-01T20:05:22.594365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:05:39 up 0 min,  0 users,  load average: 0.97, 0.25, 0.08
	Linux test-preload-245765 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  1 18:07:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02.8"
	
	
	==> kube-apiserver [1d77e79420b5f4de4bc4a08b6e9173a087e552222830fd467dee131893215961] <==
	I1201 20:05:23.341832       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1201 20:05:23.342455       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1201 20:05:23.342835       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1201 20:05:23.347758       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1201 20:05:23.349955       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1201 20:05:23.353594       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1201 20:05:23.353747       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1201 20:05:23.353927       1 aggregator.go:171] initial CRD sync complete...
	I1201 20:05:23.353952       1 autoregister_controller.go:144] Starting autoregister controller
	I1201 20:05:23.353968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1201 20:05:23.353984       1 cache.go:39] Caches are synced for autoregister controller
	I1201 20:05:23.354404       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1201 20:05:23.372606       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1201 20:05:23.378023       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1201 20:05:23.378070       1 policy_source.go:240] refreshing policies
	I1201 20:05:23.428332       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1201 20:05:24.063672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1201 20:05:24.129158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1201 20:05:24.846472       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1201 20:05:24.896969       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1201 20:05:24.935379       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1201 20:05:24.942759       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1201 20:05:26.687822       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1201 20:05:26.928777       1 controller.go:667] quota admission added evaluator for: endpoints
	I1201 20:05:27.031030       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4fddb9e671103fd967f638abcb260a9b581fad74466c7d4a3284e1952f222961] <==
	I1201 20:05:26.648341       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1201 20:05:26.648371       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1201 20:05:26.648375       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1201 20:05:26.648380       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1201 20:05:26.653844       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:05:26.659247       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:05:26.659365       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1201 20:05:26.659389       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1201 20:05:26.672033       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1201 20:05:26.675057       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1201 20:05:26.675139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1201 20:05:26.675179       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1201 20:05:26.675269       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-245765"
	I1201 20:05:26.675329       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1201 20:05:26.675394       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1201 20:05:26.675463       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1201 20:05:26.675690       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1201 20:05:26.675931       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1201 20:05:26.676402       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1201 20:05:26.676719       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1201 20:05:26.676786       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1201 20:05:26.676805       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1201 20:05:26.676908       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1201 20:05:26.677242       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1201 20:05:26.677793       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [8a55f4306ea91aaf2550fe44ffa0cf8c5ca0b7f177d683b95665c982fe903700] <==
	I1201 20:05:24.830173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1201 20:05:24.931403       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1201 20:05:24.931453       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.215"]
	E1201 20:05:24.931524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1201 20:05:24.976881       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1201 20:05:24.976953       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1201 20:05:24.976982       1 server_linux.go:132] "Using iptables Proxier"
	I1201 20:05:24.987011       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1201 20:05:24.987489       1 server.go:527] "Version info" version="v1.34.2"
	I1201 20:05:24.987625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:05:24.994217       1 config.go:200] "Starting service config controller"
	I1201 20:05:24.994232       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1201 20:05:24.994296       1 config.go:106] "Starting endpoint slice config controller"
	I1201 20:05:24.994305       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1201 20:05:24.994323       1 config.go:403] "Starting serviceCIDR config controller"
	I1201 20:05:24.994326       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1201 20:05:25.002161       1 config.go:309] "Starting node config controller"
	I1201 20:05:25.002188       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1201 20:05:25.002195       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1201 20:05:25.095034       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1201 20:05:25.095109       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1201 20:05:25.095138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d048a7c46c548ce324f6a2120f50fea5a0b425f33ebca378b805fc386b163bac] <==
	I1201 20:05:23.282625       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1201 20:05:23.287612       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:05:23.287669       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1201 20:05:23.289937       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1201 20:05:23.290021       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1201 20:05:23.297018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1201 20:05:23.318689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1201 20:05:23.318698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1201 20:05:23.318792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1201 20:05:23.318873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1201 20:05:23.318945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1201 20:05:23.319020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1201 20:05:23.319085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1201 20:05:23.319154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1201 20:05:23.319220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1201 20:05:23.319277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1201 20:05:23.319354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1201 20:05:23.319419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1201 20:05:23.319488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1201 20:05:23.319612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1201 20:05:23.319697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1201 20:05:23.319801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1201 20:05:23.319877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1201 20:05:23.321623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1201 20:05:24.388685       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 01 20:05:23 test-preload-245765 kubelet[1163]: E1201 20:05:23.487612    1163 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-245765\" already exists" pod="kube-system/kube-scheduler-test-preload-245765"
	Dec 01 20:05:23 test-preload-245765 kubelet[1163]: I1201 20:05:23.487657    1163 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-245765"
	Dec 01 20:05:23 test-preload-245765 kubelet[1163]: E1201 20:05:23.498706    1163 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-245765\" already exists" pod="kube-system/etcd-test-preload-245765"
	Dec 01 20:05:23 test-preload-245765 kubelet[1163]: I1201 20:05:23.498742    1163 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-245765"
	Dec 01 20:05:23 test-preload-245765 kubelet[1163]: E1201 20:05:23.509422    1163 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-245765\" already exists" pod="kube-system/kube-apiserver-test-preload-245765"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.020177    1163 apiserver.go:52] "Watching apiserver"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.024645    1163 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-4k2vv" podUID="2981768c-9197-48dd-a3e9-4e07027c8910"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.049812    1163 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.053933    1163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/55bec97f-a2a0-48f1-8328-6ecb349209b6-tmp\") pod \"storage-provisioner\" (UID: \"55bec97f-a2a0-48f1-8328-6ecb349209b6\") " pod="kube-system/storage-provisioner"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.053989    1163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5db6f825-edf2-4b48-b7e9-da262f703712-xtables-lock\") pod \"kube-proxy-rbqdv\" (UID: \"5db6f825-edf2-4b48-b7e9-da262f703712\") " pod="kube-system/kube-proxy-rbqdv"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.054008    1163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5db6f825-edf2-4b48-b7e9-da262f703712-lib-modules\") pod \"kube-proxy-rbqdv\" (UID: \"5db6f825-edf2-4b48-b7e9-da262f703712\") " pod="kube-system/kube-proxy-rbqdv"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.054209    1163 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.054277    1163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume podName:2981768c-9197-48dd-a3e9-4e07027c8910 nodeName:}" failed. No retries permitted until 2025-12-01 20:05:24.554258911 +0000 UTC m=+6.629911762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume") pod "coredns-66bc5c9577-4k2vv" (UID: "2981768c-9197-48dd-a3e9-4e07027c8910") : object "kube-system"/"coredns" not registered
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.188633    1163 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-245765"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.199922    1163 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-245765\" already exists" pod="kube-system/kube-scheduler-test-preload-245765"
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.558233    1163 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: E1201 20:05:24.558660    1163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume podName:2981768c-9197-48dd-a3e9-4e07027c8910 nodeName:}" failed. No retries permitted until 2025-12-01 20:05:25.558641443 +0000 UTC m=+7.634294268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume") pod "coredns-66bc5c9577-4k2vv" (UID: "2981768c-9197-48dd-a3e9-4e07027c8910") : object "kube-system"/"coredns" not registered
	Dec 01 20:05:24 test-preload-245765 kubelet[1163]: I1201 20:05:24.775823    1163 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 01 20:05:25 test-preload-245765 kubelet[1163]: E1201 20:05:25.570579    1163 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 01 20:05:25 test-preload-245765 kubelet[1163]: E1201 20:05:25.570673    1163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume podName:2981768c-9197-48dd-a3e9-4e07027c8910 nodeName:}" failed. No retries permitted until 2025-12-01 20:05:27.570657573 +0000 UTC m=+9.646310399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2981768c-9197-48dd-a3e9-4e07027c8910-config-volume") pod "coredns-66bc5c9577-4k2vv" (UID: "2981768c-9197-48dd-a3e9-4e07027c8910") : object "kube-system"/"coredns" not registered
	Dec 01 20:05:28 test-preload-245765 kubelet[1163]: E1201 20:05:28.111946    1163 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764619528110701130 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 01 20:05:28 test-preload-245765 kubelet[1163]: E1201 20:05:28.111989    1163 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764619528110701130 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 01 20:05:35 test-preload-245765 kubelet[1163]: I1201 20:05:35.632803    1163 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 01 20:05:38 test-preload-245765 kubelet[1163]: E1201 20:05:38.116412    1163 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764619538116199775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 01 20:05:38 test-preload-245765 kubelet[1163]: E1201 20:05:38.116433    1163 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764619538116199775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [417a188c3021fea0e4587f1c6341545e4694d280f3a1d06341dd27655325f2f4] <==
	I1201 20:05:24.677223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-245765 -n test-preload-245765
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-245765 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-245765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-245765
--- FAIL: TestPreload (145.22s)

x
+
TestPause/serial/SecondStartNoReconfiguration (66.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-092823 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-092823 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.511231855s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-092823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-092823" primary control-plane node in "pause-092823" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-092823" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1201 20:14:33.689937   52760 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:14:33.690214   52760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:33.690224   52760 out.go:374] Setting ErrFile to fd 2...
	I1201 20:14:33.690229   52760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:33.690482   52760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:14:33.691017   52760 out.go:368] Setting JSON to false
	I1201 20:14:33.691971   52760 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7017,"bootTime":1764613057,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:14:33.692034   52760 start.go:143] virtualization: kvm guest
	I1201 20:14:33.790718   52760 out.go:179] * [pause-092823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:14:33.813702   52760 notify.go:221] Checking for updates...
	I1201 20:14:33.813735   52760 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:14:33.876898   52760 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:14:33.878306   52760 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:14:33.880193   52760 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:33.881543   52760 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:14:33.882883   52760 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:14:33.884854   52760 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:33.885512   52760 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:14:33.925441   52760 out.go:179] * Using the kvm2 driver based on existing profile
	I1201 20:14:33.926913   52760 start.go:309] selected driver: kvm2
	I1201 20:14:33.926930   52760 start.go:927] validating driver "kvm2" against &{Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:14:33.927143   52760 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:14:33.928496   52760 cni.go:84] Creating CNI manager for ""
	I1201 20:14:33.928568   52760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:33.928635   52760 start.go:353] cluster config:
	{Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:14:33.928803   52760 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:33.931482   52760 out.go:179] * Starting "pause-092823" primary control-plane node in "pause-092823" cluster
	I1201 20:14:33.932698   52760 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:33.932737   52760 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 20:14:33.932747   52760 cache.go:65] Caching tarball of preloaded images
	I1201 20:14:33.932866   52760 preload.go:238] Found /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1201 20:14:33.932884   52760 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1201 20:14:33.933074   52760 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/config.json ...
	I1201 20:14:33.933379   52760 start.go:360] acquireMachinesLock for pause-092823: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 20:14:46.102145   52760 start.go:364] duration metric: took 12.168731127s to acquireMachinesLock for "pause-092823"
	I1201 20:14:46.102209   52760 start.go:96] Skipping create...Using existing machine configuration
	I1201 20:14:46.102217   52760 fix.go:54] fixHost starting: 
	I1201 20:14:46.104899   52760 fix.go:112] recreateIfNeeded on pause-092823: state=Running err=<nil>
	W1201 20:14:46.104939   52760 fix.go:138] unexpected machine state, will restart: <nil>
	I1201 20:14:46.109857   52760 out.go:252] * Updating the running kvm2 "pause-092823" VM ...
	I1201 20:14:46.110055   52760 machine.go:94] provisionDockerMachine start ...
	I1201 20:14:46.113317   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.113741   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.113793   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.114012   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.114202   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.114212   52760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:14:46.229463   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.229492   52760 buildroot.go:166] provisioning hostname "pause-092823"
	I1201 20:14:46.232733   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233167   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.233203   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233419   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.233696   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.233710   52760 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-092823 && echo "pause-092823" | sudo tee /etc/hostname
	I1201 20:14:46.367930   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.371177   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371647   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.371687   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371880   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.372115   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.372133   52760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-092823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-092823/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-092823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:14:46.479675   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:14:46.479706   52760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:14:46.479731   52760 buildroot.go:174] setting up certificates
	I1201 20:14:46.479747   52760 provision.go:84] configureAuth start
	I1201 20:14:46.484077   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.484730   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.484756   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.487821   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488406   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.488430   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488632   52760 provision.go:143] copyHostCerts
	I1201 20:14:46.488690   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:14:46.488704   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:14:46.488770   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:14:46.488905   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:14:46.488916   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:14:46.488942   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:14:46.488997   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:14:46.489004   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:14:46.489029   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:14:46.489082   52760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.pause-092823 san=[127.0.0.1 192.168.83.165 localhost minikube pause-092823]
	I1201 20:14:46.686710   52760 provision.go:177] copyRemoteCerts
	I1201 20:14:46.686768   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:14:46.690170   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690641   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.690677   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690864   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:46.779314   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:14:46.819937   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:14:46.860475   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:14:46.894812   52760 provision.go:87] duration metric: took 415.050462ms to configureAuth
	I1201 20:14:46.894865   52760 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:14:46.895130   52760 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:46.898900   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899407   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.899438   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899741   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.900056   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.900086   52760 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:14:52.488148   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:14:52.488175   52760 machine.go:97] duration metric: took 6.37809979s to provisionDockerMachine
	I1201 20:14:52.488187   52760 start.go:293] postStartSetup for "pause-092823" (driver="kvm2")
	I1201 20:14:52.488195   52760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:14:52.488261   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:14:52.491556   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492162   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.492208   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492403   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.576626   52760 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:14:52.581811   52760 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:14:52.581846   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:14:52.581909   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:14:52.582032   52760 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:14:52.582295   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:14:52.594982   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:52.625436   52760 start.go:296] duration metric: took 137.237635ms for postStartSetup
	I1201 20:14:52.625470   52760 fix.go:56] duration metric: took 6.523253983s for fixHost
	I1201 20:14:52.628161   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628602   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.628625   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628891   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:52.629082   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:52.629092   52760 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:14:52.736550   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764620092.731694487
	
	I1201 20:14:52.736575   52760 fix.go:216] guest clock: 1764620092.731694487
	I1201 20:14:52.736585   52760 fix.go:229] Guest: 2025-12-01 20:14:52.731694487 +0000 UTC Remote: 2025-12-01 20:14:52.625474204 +0000 UTC m=+18.993217111 (delta=106.220283ms)
	I1201 20:14:52.736604   52760 fix.go:200] guest clock delta is within tolerance: 106.220283ms
	I1201 20:14:52.736610   52760 start.go:83] releasing machines lock for "pause-092823", held for 6.634421762s
	I1201 20:14:52.740375   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.740960   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.740994   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.741631   52760 ssh_runner.go:195] Run: cat /version.json
	I1201 20:14:52.741783   52760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:14:52.745853   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.745875   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746286   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746314   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746289   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746401   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746529   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.746712   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.824741   52760 ssh_runner.go:195] Run: systemctl --version
	I1201 20:14:52.864010   52760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:14:53.029292   52760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:14:53.039060   52760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:14:53.039157   52760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:14:53.051696   52760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:14:53.051726   52760 start.go:496] detecting cgroup driver to use...
	I1201 20:14:53.051817   52760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:14:53.084736   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:14:53.105317   52760 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:14:53.105382   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:14:53.127096   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:14:53.146720   52760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:14:53.342461   52760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:14:53.535204   52760 docker.go:234] disabling docker service ...
	I1201 20:14:53.535269   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:14:53.570923   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:14:53.590115   52760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:14:53.786626   52760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:14:53.946106   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:14:53.963354   52760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:14:53.989355   52760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:14:53.989416   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.002464   52760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:14:54.002540   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.016206   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.029704   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.043723   52760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:14:54.057882   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.070957   52760 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.087846   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.101197   52760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:14:54.112550   52760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:14:54.124663   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:54.311031   52760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:14:59.173899   52760 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.862827849s)
	I1201 20:14:59.173940   52760 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:14:59.174012   52760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:14:59.179694   52760 start.go:564] Will wait 60s for crictl version
	I1201 20:14:59.179756   52760 ssh_runner.go:195] Run: which crictl
	I1201 20:14:59.184414   52760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:14:59.230118   52760 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:14:59.230230   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.263521   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.300977   52760 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1201 20:14:59.306339   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.306927   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:59.306959   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.307206   52760 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1201 20:14:59.313485   52760 kubeadm.go:884] updating cluster {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:14:59.313668   52760 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:59.313743   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.361359   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.361384   52760 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:14:59.361439   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.400162   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.400196   52760 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:14:59.400206   52760 kubeadm.go:935] updating node { 192.168.83.165 8443 v1.34.2 crio true true} ...
	I1201 20:14:59.400427   52760 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-092823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:14:59.400636   52760 ssh_runner.go:195] Run: crio config
	I1201 20:14:59.465906   52760 cni.go:84] Creating CNI manager for ""
	I1201 20:14:59.465932   52760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:59.465953   52760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:14:59.465980   52760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.165 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-092823 NodeName:pause-092823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:14:59.466141   52760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-092823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.165"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.165"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:14:59.466219   52760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:14:59.483088   52760 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:14:59.483155   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:14:59.498563   52760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1201 20:14:59.524693   52760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:14:59.553688   52760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1201 20:14:59.581709   52760 ssh_runner.go:195] Run: grep 192.168.83.165	control-plane.minikube.internal$ /etc/hosts
	I1201 20:14:59.586499   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:59.815049   52760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:14:59.839071   52760 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823 for IP: 192.168.83.165
	I1201 20:14:59.839094   52760 certs.go:195] generating shared ca certs ...
	I1201 20:14:59.839113   52760 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:59.839291   52760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:14:59.839352   52760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:14:59.839363   52760 certs.go:257] generating profile certs ...
	I1201 20:14:59.839525   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/client.key
	I1201 20:14:59.839599   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key.467a48e8
	I1201 20:14:59.839653   52760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key
	I1201 20:14:59.839841   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:14:59.839889   52760 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:14:59.839907   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:14:59.839940   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:14:59.839972   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:14:59.840008   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:14:59.840079   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:59.840980   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:14:59.877975   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:14:59.920143   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:14:59.958913   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:14:59.998579   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:15:00.040718   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:15:00.079238   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:15:00.115945   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:15:00.156664   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:15:00.254482   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:15:00.314684   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:15:00.423437   52760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:15:00.526481   52760 ssh_runner.go:195] Run: openssl version
	I1201 20:15:00.562753   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:15:00.613901   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633573   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633642   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.654970   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:15:00.678714   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:15:00.707854   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718201   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718272   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.734107   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:15:00.765014   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:15:00.802005   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814660   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814737   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.828225   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:15:00.851528   52760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:15:00.862966   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:15:00.879649   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:15:00.896138   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:15:00.912901   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:15:00.931391   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:15:00.949981   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:15:00.974116   52760 kubeadm.go:401] StartCluster: {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:15:00.974237   52760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:15:00.974317   52760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:15:01.099401   52760 cri.go:89] found id: "b06d0a76963ef52a5089e4fd9f7322e4c132cc7cef44a0a08aa8f5cdb5a85721"
	I1201 20:15:01.099424   52760 cri.go:89] found id: "57ec1c4b4eeef1781fa5e1f415b57c209059e745e9ed9ebd9f95ed9977eb2e49"
	I1201 20:15:01.099430   52760 cri.go:89] found id: "c903efca5f1e25a2e14a9d7025e07e8179e959d210c875262c49b8986bddc200"
	I1201 20:15:01.099435   52760 cri.go:89] found id: "cc26867cc3d6b7f6b97323de740dcc9dcc89282ab532326b58cba0dd488bb014"
	I1201 20:15:01.099439   52760 cri.go:89] found id: "163dc1a002a3236d0a9e0a45f1ad098210f847d393a710d6b68d41c70b87fc74"
	I1201 20:15:01.099443   52760 cri.go:89] found id: "2fa21d457805dc989eb2a8ef14f2955ec09a508de2674c5bc92ce8b6542a5051"
	I1201 20:15:01.099449   52760 cri.go:89] found id: "03cfd9a73d71461c21c0c9c0c15a1ee0ccc6a97d33909a53fa38e88ce0b2deae"
	I1201 20:15:01.099454   52760 cri.go:89] found id: ""
	I1201 20:15:01.099504   52760 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-092823 -n pause-092823
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-092823 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-092823 logs -n 25: (1.35952778s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-539916                                                                                                                                                                                               │ old-k8s-version-539916       │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p stopped-upgrade-921033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-921033       │ jenkins │ v1.35.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ stop    │ stopped-upgrade-921033 stop                                                                                                                                                                                             │ stopped-upgrade-921033       │ jenkins │ v1.35.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p stopped-upgrade-921033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:13 UTC │
	│ delete  │ -p kubernetes-upgrade-903802                                                                                                                                                                                            │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-893069                                                                                                                                                                                         │ disable-driver-mounts-893069 │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p pause-092823 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-092823                 │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-921033 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │                     │
	│ delete  │ -p stopped-upgrade-921033                                                                                                                                                                                               │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:13 UTC │
	│ start   │ -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                                    │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:15 UTC │
	│ start   │ -p cert-expiration-769037 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-769037       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-399758 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-399758       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │                     │
	│ delete  │ -p running-upgrade-399758                                                                                                                                                                                               │ running-upgrade-399758       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p cert-options-495506 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ delete  │ -p cert-expiration-769037                                                                                                                                                                                               │ cert-expiration-769037       │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                  │ default-k8s-diff-port-240409 │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │                     │
	│ start   │ -p pause-092823 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-092823                 │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:15 UTC │
	│ ssh     │ cert-options-495506 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ ssh     │ -p cert-options-495506 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ delete  │ -p cert-options-495506                                                                                                                                                                                                  │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                            │ no-preload-931553            │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-200621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:15 UTC │ 01 Dec 25 20:15 UTC │
	│ stop    │ -p embed-certs-200621 --alsologtostderr -v=3                                                                                                                                                                            │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:14:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:14:47.183527   52992 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:14:47.183745   52992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:47.183754   52992 out.go:374] Setting ErrFile to fd 2...
	I1201 20:14:47.183758   52992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:47.184009   52992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:14:47.184471   52992 out.go:368] Setting JSON to false
	I1201 20:14:47.185355   52992 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7030,"bootTime":1764613057,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:14:47.185409   52992 start.go:143] virtualization: kvm guest
	I1201 20:14:47.187377   52992 out.go:179] * [no-preload-931553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:14:47.188646   52992 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:14:47.188657   52992 notify.go:221] Checking for updates...
	I1201 20:14:47.191186   52992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:14:47.192560   52992 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:14:47.193868   52992 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:47.194991   52992 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:14:47.196291   52992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:14:47.198302   52992 config.go:182] Loaded profile config "default-k8s-diff-port-240409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198457   52992 config.go:182] Loaded profile config "embed-certs-200621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198576   52992 config.go:182] Loaded profile config "guest-790070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1201 20:14:47.198779   52992 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198939   52992 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:14:47.236736   52992 out.go:179] * Using the kvm2 driver based on user configuration
	I1201 20:14:47.237917   52992 start.go:309] selected driver: kvm2
	I1201 20:14:47.237941   52992 start.go:927] validating driver "kvm2" against <nil>
	I1201 20:14:47.237965   52992 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:14:47.238685   52992 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:14:47.238999   52992 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:14:47.239030   52992 cni.go:84] Creating CNI manager for ""
	I1201 20:14:47.239074   52992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:47.239083   52992 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 20:14:47.239153   52992 start.go:353] cluster config:
	{Name:no-preload-931553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-931553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:14:47.239302   52992 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.240953   52992 out.go:179] * Starting "no-preload-931553" primary control-plane node in "no-preload-931553" cluster
	I1201 20:14:45.167420   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:14:45.167454   52634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:14:45.167481   52634 buildroot.go:174] setting up certificates
	I1201 20:14:45.167495   52634 provision.go:84] configureAuth start
	I1201 20:14:45.170546   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.171136   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.171163   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174074   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174542   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.174568   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174763   52634 provision.go:143] copyHostCerts
	I1201 20:14:45.174871   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:14:45.174888   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:14:45.175462   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:14:45.175567   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:14:45.175576   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:14:45.175608   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:14:45.175696   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:14:45.175706   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:14:45.175730   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:14:45.175779   52634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-240409 san=[127.0.0.1 192.168.61.174 default-k8s-diff-port-240409 localhost minikube]
	I1201 20:14:45.303247   52634 provision.go:177] copyRemoteCerts
	I1201 20:14:45.303300   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:14:45.306523   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.307034   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.307073   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.307239   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:45.403134   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:14:45.441176   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:14:45.484059   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:14:45.516999   52634 provision.go:87] duration metric: took 349.489118ms to configureAuth
	I1201 20:14:45.517040   52634 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:14:45.517313   52634 config.go:182] Loaded profile config "default-k8s-diff-port-240409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:45.520969   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.521607   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.521650   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.521894   52634 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:45.522168   52634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1201 20:14:45.522211   52634 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:14:45.817359   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:14:45.817394   52634 machine.go:97] duration metric: took 1.068512033s to provisionDockerMachine
	I1201 20:14:45.817406   52634 client.go:176] duration metric: took 20.135801226s to LocalClient.Create
	I1201 20:14:45.817421   52634 start.go:167] duration metric: took 20.135870719s to libmachine.API.Create "default-k8s-diff-port-240409"
	I1201 20:14:45.817431   52634 start.go:293] postStartSetup for "default-k8s-diff-port-240409" (driver="kvm2")
	I1201 20:14:45.817443   52634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:14:45.817519   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:14:45.821191   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.821768   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.821807   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.822038   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:45.917327   52634 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:14:45.922592   52634 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:14:45.922620   52634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:14:45.922722   52634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:14:45.922888   52634 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:14:45.923041   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:14:45.935359   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:45.971132   52634 start.go:296] duration metric: took 153.685278ms for postStartSetup
	I1201 20:14:45.974892   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.975596   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.975640   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.976018   52634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/config.json ...
	I1201 20:14:45.976317   52634 start.go:128] duration metric: took 20.297099778s to createHost
	I1201 20:14:45.979316   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.979884   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.979922   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.980143   52634 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:45.980410   52634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1201 20:14:45.980422   52634 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:14:46.101999   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764620086.053227742
	
	I1201 20:14:46.102026   52634 fix.go:216] guest clock: 1764620086.053227742
	I1201 20:14:46.102036   52634 fix.go:229] Guest: 2025-12-01 20:14:46.053227742 +0000 UTC Remote: 2025-12-01 20:14:45.976335891 +0000 UTC m=+20.928515406 (delta=76.891851ms)
	I1201 20:14:46.102057   52634 fix.go:200] guest clock delta is within tolerance: 76.891851ms
	I1201 20:14:46.102065   52634 start.go:83] releasing machines lock for "default-k8s-diff-port-240409", held for 20.423107523s
	I1201 20:14:46.105969   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.106390   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.106417   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.106984   52634 ssh_runner.go:195] Run: cat /version.json
	I1201 20:14:46.107080   52634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:14:46.111016   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111379   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111503   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.111570   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111843   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:46.112197   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.112230   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.112414   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:46.196040   52634 ssh_runner.go:195] Run: systemctl --version
	I1201 20:14:46.237058   52634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:14:46.423020   52634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:14:46.430998   52634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:14:46.431066   52634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:14:46.454197   52634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 20:14:46.454231   52634 start.go:496] detecting cgroup driver to use...
	I1201 20:14:46.454306   52634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:14:46.473074   52634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:14:46.495781   52634 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:14:46.495855   52634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:14:46.516376   52634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:14:46.540083   52634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:14:46.713375   52634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:14:46.937030   52634 docker.go:234] disabling docker service ...
	I1201 20:14:46.937104   52634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:14:46.956633   52634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:14:46.973154   52634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:14:47.157385   52634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:14:47.312983   52634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:14:47.334236   52634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:14:47.356621   52634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:14:47.356692   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.369199   52634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:14:47.369268   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.381552   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.393491   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.406008   52634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:14:47.420194   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.432901   52634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.457057   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.470020   52634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:14:47.480226   52634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 20:14:47.480282   52634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 20:14:47.503255   52634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:14:47.515273   52634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:47.661750   52634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:14:47.780382   52634 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:14:47.780501   52634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:14:47.786241   52634 start.go:564] Will wait 60s for crictl version
	I1201 20:14:47.786298   52634 ssh_runner.go:195] Run: which crictl
	I1201 20:14:47.790475   52634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:14:47.834682   52634 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:14:47.834759   52634 ssh_runner.go:195] Run: crio --version
	I1201 20:14:47.868455   52634 ssh_runner.go:195] Run: crio --version
	I1201 20:14:47.905160   52634 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1201 20:14:46.109857   52760 out.go:252] * Updating the running kvm2 "pause-092823" VM ...
	I1201 20:14:46.110055   52760 machine.go:94] provisionDockerMachine start ...
	I1201 20:14:46.113317   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.113741   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.113793   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.114012   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.114202   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.114212   52760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:14:46.229463   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.229492   52760 buildroot.go:166] provisioning hostname "pause-092823"
	I1201 20:14:46.232733   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233167   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.233203   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233419   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.233696   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.233710   52760 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-092823 && echo "pause-092823" | sudo tee /etc/hostname
	I1201 20:14:46.367930   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.371177   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371647   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.371687   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371880   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.372115   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.372133   52760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-092823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-092823/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-092823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:14:46.479675   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
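(Editor's note) The hostname and /etc/hosts steps logged above are plain shell commands executed over SSH against the guest VM (the "Using SSH client type: native" lines). As a rough, illustrative sketch only — not minikube's provisioner code — one such remote command could be run with golang.org/x/crypto/ssh, reusing the guest IP and per-machine key path that appear in the log:

// Illustrative sketch only (not minikube's provisioner): run one remote
// provisioning command over SSH, using the guest IP and per-machine key
// path shown in the log above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.83.165:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same idempotent hostname command the provisioner issued above.
	out, err := sess.CombinedOutput(`sudo hostname pause-092823 && echo "pause-092823" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v, output: %s\n", err, out)
}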
	I1201 20:14:46.479706   52760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:14:46.479731   52760 buildroot.go:174] setting up certificates
	I1201 20:14:46.479747   52760 provision.go:84] configureAuth start
	I1201 20:14:46.484077   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.484730   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.484756   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.487821   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488406   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.488430   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488632   52760 provision.go:143] copyHostCerts
	I1201 20:14:46.488690   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:14:46.488704   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:14:46.488770   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:14:46.488905   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:14:46.488916   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:14:46.488942   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:14:46.488997   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:14:46.489004   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:14:46.489029   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:14:46.489082   52760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.pause-092823 san=[127.0.0.1 192.168.83.165 localhost minikube pause-092823]
	I1201 20:14:46.686710   52760 provision.go:177] copyRemoteCerts
	I1201 20:14:46.686768   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:14:46.690170   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690641   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.690677   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690864   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:46.779314   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:14:46.819937   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:14:46.860475   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:14:46.894812   52760 provision.go:87] duration metric: took 415.050462ms to configureAuth
	I1201 20:14:46.894865   52760 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:14:46.895130   52760 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:46.898900   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899407   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.899438   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899741   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.900056   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.900086   52760 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1201 20:14:47.155446   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:49.156538   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	I1201 20:14:47.909197   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:47.909533   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:47.909558   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:47.909723   52634 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1201 20:14:47.914571   52634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:14:47.930354   52634 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-240409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:14:47.930525   52634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:47.930587   52634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:47.961975   52634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1201 20:14:47.962046   52634 ssh_runner.go:195] Run: which lz4
	I1201 20:14:47.966898   52634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1201 20:14:47.971810   52634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1201 20:14:47.971859   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1201 20:14:49.285236   52634 crio.go:462] duration metric: took 1.318369525s to copy over tarball
	I1201 20:14:49.285335   52634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1201 20:14:47.242088   52992 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:14:47.242209   52992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/config.json ...
	I1201 20:14:47.242238   52992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/config.json: {Name:mkd2874a8690bfa0ca1e32be6071cc44a5528829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:47.242353   52992 cache.go:107] acquiring lock: {Name:mk84dbde9ead2d8c90480eafbfe358f5ca6aa5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242378   52992 cache.go:107] acquiring lock: {Name:mk2254117dde6fc1fafd6b7df235ae600972ce9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242406   52992 cache.go:107] acquiring lock: {Name:mkd72cc39eeccad67fc1dd1790c288bb41eb7d61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242426   52992 start.go:360] acquireMachinesLock for no-preload-931553: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 20:14:47.242395   52992 cache.go:107] acquiring lock: {Name:mkc6fd8399654fe6c9b82a431ef0794c2a7a4690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242402   52992 cache.go:107] acquiring lock: {Name:mkf2a81443f61667e01c303dee76734732f0b214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242455   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:14:47.242457   52992 cache.go:107] acquiring lock: {Name:mkcff4b34d831100ce78e43d852eb57d715d1454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242466   52992 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 153.634µs
	I1201 20:14:47.242442   52992 cache.go:107] acquiring lock: {Name:mkd2fd755acadb41dbec175a149204e5724f4d65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242500   52992 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242570   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:14:47.242586   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:14:47.242593   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:14:47.242597   52992 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 195.663µs
	I1201 20:14:47.242581   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:14:47.242608   52992 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242604   52992 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 249.11µs
	I1201 20:14:47.242613   52992 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 173.914µs
	I1201 20:14:47.242623   52992 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:14:47.242598   52992 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 208.049µs
	I1201 20:14:47.242635   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:14:47.242637   52992 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:14:47.242637   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:14:47.242616   52992 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242642   52992 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 261.605µs
	I1201 20:14:47.242651   52992 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:14:47.242647   52992 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 192.259µs
	I1201 20:14:47.242661   52992 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242727   52992 cache.go:107] acquiring lock: {Name:mk6d9bc57e707fb535b355bc5d8318e6c5e321e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242892   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:14:47.242907   52992 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 228.65µs
	I1201 20:14:47.242929   52992 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:14:47.242944   52992 cache.go:87] Successfully saved all images to host disk.
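(Editor's note) The cache.go:115 / cache.go:96 lines above are cache hits: each image already has a tar file under .minikube/cache/images, so each "save to tar file ... succeeded" completes in a few hundred microseconds without pulling anything. A loose sketch of that existence check follows; the cache directory layout and image list are assumptions for illustration, not minikube's cache.go:

// Loose sketch of the cache-hit path logged above: if an image's tar
// file already exists under the cache dir, skip re-saving it.
// Paths and the image list are illustrative assumptions.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/pause:3.10.1",
	}
	for _, img := range images {
		// "repo:tag" becomes "repo_tag" on disk, as in the log's file names.
		file := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
		if _, err := os.Stat(file); err == nil {
			fmt.Printf("cache hit, skipping save: %s\n", file)
			continue
		}
		fmt.Printf("cache miss, would save: %s\n", img)
	}
}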
	I1201 20:14:50.798238   52634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.512872769s)
	I1201 20:14:50.798265   52634 crio.go:469] duration metric: took 1.512991808s to extract the tarball
	I1201 20:14:50.798274   52634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1201 20:14:50.843142   52634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:50.892975   52634 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:50.893000   52634 cache_images.go:86] Images are preloaded, skipping loading
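(Editor's note) The preload flow above brackets the tarball copy with two probes of `sudo crictl images --output json` (crio.go:510 before, crio.go:514 after): if a reference image such as kube-apiserver for the target version is missing, the preloaded tarball is scp'd over and extracted into /var, then the probe is repeated. A hedged sketch of that check, assuming `crictl` is on the guest's PATH and the camelCase CRI JSON field names it prints:

// Rough reconstruction of the "are images preloaded?" probe seen above.
// Assumes crictl on PATH; field names follow the JSON that
// `crictl images --output json` emits.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	want := "registry.k8s.io/kube-apiserver:v1.34.2"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("all images are preloaded")
				return
			}
		}
	}
	fmt.Println("not preloaded; tarball copy and extract needed")
}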
	I1201 20:14:50.893010   52634 kubeadm.go:935] updating node { 192.168.61.174 8444 v1.34.2 crio true true} ...
	I1201 20:14:50.893115   52634 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-240409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:14:50.893219   52634 ssh_runner.go:195] Run: crio config
	I1201 20:14:50.950575   52634 cni.go:84] Creating CNI manager for ""
	I1201 20:14:50.950609   52634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:50.950634   52634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:14:50.950666   52634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-240409 NodeName:default-k8s-diff-port-240409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:14:50.950860   52634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-240409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:14:50.950943   52634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:14:50.963487   52634 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:14:50.963565   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:14:50.975724   52634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1201 20:14:50.996094   52634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:14:51.016562   52634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1201 20:14:51.039504   52634 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1201 20:14:51.044067   52634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:14:51.061235   52634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:51.208480   52634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:14:51.229890   52634 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409 for IP: 192.168.61.174
	I1201 20:14:51.229915   52634 certs.go:195] generating shared ca certs ...
	I1201 20:14:51.229931   52634 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.230090   52634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:14:51.230145   52634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:14:51.230172   52634 certs.go:257] generating profile certs ...
	I1201 20:14:51.230242   52634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key
	I1201 20:14:51.230261   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt with IP's: []
	I1201 20:14:51.341598   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt ...
	I1201 20:14:51.341629   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: {Name:mk1340933d479a2dcb2255abd41dc9882eb49d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.341859   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key ...
	I1201 20:14:51.341877   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key: {Name:mkb5e72caa99e3637774a4d5cadafd1d55322ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.342000   52634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1
	I1201 20:14:51.342028   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.174]
	I1201 20:14:51.385601   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 ...
	I1201 20:14:51.385628   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1: {Name:mk6c64c21f2aef05c60c69fcce4b5db8312d9add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.385846   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1 ...
	I1201 20:14:51.385864   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1: {Name:mkc4bbd3995970395f2f457af1635fe000662e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.385986   52634 certs.go:382] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt
	I1201 20:14:51.386093   52634 certs.go:386] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key
	I1201 20:14:51.386183   52634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key
	I1201 20:14:51.386205   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt with IP's: []
	I1201 20:14:51.506418   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt ...
	I1201 20:14:51.506444   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt: {Name:mka309d8bdf4e1d7e03db0e4bfd44fc7378416fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.506612   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key ...
	I1201 20:14:51.506624   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key: {Name:mkd9e00b9243d5ae537f07e285172b0875022977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.506787   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:14:51.506838   52634 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:14:51.506849   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:14:51.506876   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:14:51.506900   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:14:51.506922   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:14:51.506961   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
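(Editor's note) The "generating signed profile cert" steps above issue CA-signed leaf certificates (client, apiserver with its SAN list, proxy-client) from the shared minikubeCA material. The following is a generic crypto/x509 sketch of that kind of issuance — not minikube's crypto.go — with the SAN IPs taken from the apiserver cert line above and everything else (names, lifetimes, key sizes) illustrative:

// Generic sketch of issuing a CA-signed leaf certificate with crypto/x509.
// NOT minikube's implementation; the real flow loads the existing
// minikubeCA key pair instead of generating one inline.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Leaf cert with the SAN IPs the apiserver cert above was issued for.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.174"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}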
	I1201 20:14:51.507581   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:14:51.540569   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:14:51.570045   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:14:51.598996   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:14:51.628740   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:14:51.660569   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:14:51.697177   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:14:51.732493   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:14:51.766763   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:14:51.800429   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:14:51.835992   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:14:51.866224   52634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:14:51.886506   52634 ssh_runner.go:195] Run: openssl version
	I1201 20:14:51.893423   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:14:51.908294   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.914065   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.914148   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.921357   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:14:51.938153   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:14:51.954378   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.959773   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.959854   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.969013   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:14:51.981761   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:14:51.994587   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:14:51.999807   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:14:51.999902   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:14:52.007500   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:14:52.020509   52634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:14:52.025570   52634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:14:52.025631   52634 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-240409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:14:52.025726   52634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:14:52.025924   52634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:14:52.063721   52634 cri.go:89] found id: ""
	I1201 20:14:52.063812   52634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:14:52.079313   52634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:14:52.092357   52634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:14:52.104730   52634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:14:52.104751   52634 kubeadm.go:158] found existing configuration files:
	
	I1201 20:14:52.104800   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1201 20:14:52.116234   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:14:52.116304   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:14:52.128749   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1201 20:14:52.140246   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:14:52.140316   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:14:52.153560   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1201 20:14:52.165005   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:14:52.165078   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:14:52.176925   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1201 20:14:52.188508   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:14:52.188575   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1201 20:14:52.204974   52634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1201 20:14:52.276619   52634 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:14:52.276757   52634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:14:52.386811   52634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:14:52.386984   52634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:14:52.387145   52634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:14:52.398245   52634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
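(Editor's note) The kubeadm.go:319 lines are kubeadm's own output, streamed back from the `kubeadm init` command started above with a prepended PATH so the staged binaries under /var/lib/minikube/binaries/v1.34.2 are used. An illustrative local-exec stand-in (minikube actually launches this through its ssh_runner on the guest; the flag list is abbreviated here):

// Illustrative local-exec stand-in for the `kubeadm init` launch above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	script := `env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem` // abbreviated
	out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}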
	I1201 20:14:52.736723   52992 start.go:364] duration metric: took 5.494244255s to acquireMachinesLock for "no-preload-931553"
	I1201 20:14:52.736800   52992 start.go:93] Provisioning new machine with config: &{Name:no-preload-931553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-931553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:14:52.736959   52992 start.go:125] createHost starting for "" (driver="kvm2")
	I1201 20:14:52.488148   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:14:52.488175   52760 machine.go:97] duration metric: took 6.37809979s to provisionDockerMachine
	I1201 20:14:52.488187   52760 start.go:293] postStartSetup for "pause-092823" (driver="kvm2")
	I1201 20:14:52.488195   52760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:14:52.488261   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:14:52.491556   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492162   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.492208   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492403   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.576626   52760 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:14:52.581811   52760 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:14:52.581846   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:14:52.581909   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:14:52.582032   52760 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:14:52.582295   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:14:52.594982   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:52.625436   52760 start.go:296] duration metric: took 137.237635ms for postStartSetup
	I1201 20:14:52.625470   52760 fix.go:56] duration metric: took 6.523253983s for fixHost
	I1201 20:14:52.628161   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628602   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.628625   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628891   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:52.629082   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:52.629092   52760 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:14:52.736550   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764620092.731694487
	
	I1201 20:14:52.736575   52760 fix.go:216] guest clock: 1764620092.731694487
	I1201 20:14:52.736585   52760 fix.go:229] Guest: 2025-12-01 20:14:52.731694487 +0000 UTC Remote: 2025-12-01 20:14:52.625474204 +0000 UTC m=+18.993217111 (delta=106.220283ms)
	I1201 20:14:52.736604   52760 fix.go:200] guest clock delta is within tolerance: 106.220283ms
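(Editor's note) The fix.go lines above read the guest clock via `date +%s.%N` over SSH and compare it with the host-side reference timestamp, accepting the ~106ms delta as within tolerance. A small sketch of that comparison using the two timestamps from the log; the 2s tolerance constant is an assumption for illustration, not minikube's configured threshold:

// Sketch of the guest/host clock-delta check logged above.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	guestRaw := "1764620092.731694487"                                  // guest: `date +%s.%N`
	hostRef := time.Date(2025, 12, 1, 20, 14, 52, 625474204, time.UTC) // host reference from the log

	secs, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	delta := guest.Sub(hostRef)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < 2*time.Second)
}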
	I1201 20:14:52.736610   52760 start.go:83] releasing machines lock for "pause-092823", held for 6.634421762s
	I1201 20:14:52.740375   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.740960   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.740994   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.741631   52760 ssh_runner.go:195] Run: cat /version.json
	I1201 20:14:52.741783   52760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:14:52.745853   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.745875   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746286   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746314   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746289   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746401   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746529   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.746712   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.824741   52760 ssh_runner.go:195] Run: systemctl --version
	I1201 20:14:52.864010   52760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:14:53.029292   52760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:14:53.039060   52760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:14:53.039157   52760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:14:53.051696   52760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:14:53.051726   52760 start.go:496] detecting cgroup driver to use...
	I1201 20:14:53.051817   52760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:14:53.084736   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:14:53.105317   52760 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:14:53.105382   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:14:53.127096   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:14:53.146720   52760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:14:53.342461   52760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:14:53.535204   52760 docker.go:234] disabling docker service ...
	I1201 20:14:53.535269   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:14:53.570923   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:14:53.590115   52760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1201 20:14:51.654622   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:53.657064   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	I1201 20:14:52.399821   52634 out.go:252]   - Generating certificates and keys ...
	I1201 20:14:52.399920   52634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:14:52.399989   52634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:14:53.722898   52634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:14:53.921250   52634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:14:54.350121   52634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:14:54.517731   52634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:14:54.912311   52634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:14:54.912720   52634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-240409 localhost] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1201 20:14:55.264355   52634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:14:55.264627   52634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-240409 localhost] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1201 20:14:55.510470   52634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:14:55.542577   52634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:14:55.737314   52634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:14:55.737464   52634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:14:55.819568   52634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:14:55.950058   52634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:14:56.413524   52634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:14:56.882736   52634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:14:57.096140   52634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:14:57.096762   52634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:14:57.101975   52634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:14:52.739150   52992 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 20:14:52.739405   52992 start.go:159] libmachine.API.Create for "no-preload-931553" (driver="kvm2")
	I1201 20:14:52.739448   52992 client.go:173] LocalClient.Create starting
	I1201 20:14:52.739555   52992 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem
	I1201 20:14:52.739605   52992 main.go:143] libmachine: Decoding PEM data...
	I1201 20:14:52.739632   52992 main.go:143] libmachine: Parsing certificate...
	I1201 20:14:52.739722   52992 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem
	I1201 20:14:52.739756   52992 main.go:143] libmachine: Decoding PEM data...
	I1201 20:14:52.739774   52992 main.go:143] libmachine: Parsing certificate...
	I1201 20:14:52.740315   52992 main.go:143] libmachine: creating domain...
	I1201 20:14:52.740336   52992 main.go:143] libmachine: creating network...
	I1201 20:14:52.742116   52992 main.go:143] libmachine: found existing default network
	I1201 20:14:52.742565   52992 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 20:14:52.744501   52992 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:b8:b4} reservation:<nil>}
	I1201 20:14:52.745293   52992 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:0b:51} reservation:<nil>}
	I1201 20:14:52.746341   52992 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:ff:87} reservation:<nil>}
	I1201 20:14:52.747608   52992 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebe120}
	I1201 20:14:52.747716   52992 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-931553</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 20:14:52.753708   52992 main.go:143] libmachine: creating private network mk-no-preload-931553 192.168.72.0/24...
	I1201 20:14:52.845143   52992 main.go:143] libmachine: private network mk-no-preload-931553 192.168.72.0/24 created
	I1201 20:14:52.845512   52992 main.go:143] libmachine: <network>
	  <name>mk-no-preload-931553</name>
	  <uuid>258ac5e3-cb60-4228-9bb3-eded68491e19</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:43:9e:bd'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 20:14:52.845558   52992 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 ...
	I1201 20:14:52.845590   52992 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso
	I1201 20:14:52.845602   52992 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:52.845675   52992 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso...
	I1201 20:14:53.080568   52992 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/id_rsa...
	I1201 20:14:53.129635   52992 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk...
	I1201 20:14:53.129677   52992 main.go:143] libmachine: Writing magic tar header
	I1201 20:14:53.129705   52992 main.go:143] libmachine: Writing SSH key tar header
	I1201 20:14:53.129844   52992 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 ...
	I1201 20:14:53.129943   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553
	I1201 20:14:53.129972   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 (perms=drwx------)
	I1201 20:14:53.129988   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines
	I1201 20:14:53.130007   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines (perms=drwxr-xr-x)
	I1201 20:14:53.130028   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:53.130045   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube (perms=drwxr-xr-x)
	I1201 20:14:53.130058   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903
	I1201 20:14:53.130071   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903 (perms=drwxrwxr-x)
	I1201 20:14:53.130079   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1201 20:14:53.130090   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1201 20:14:53.130105   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1201 20:14:53.130122   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1201 20:14:53.130136   52992 main.go:143] libmachine: checking permissions on dir: /home
	I1201 20:14:53.130149   52992 main.go:143] libmachine: skipping /home - not owner
	I1201 20:14:53.130157   52992 main.go:143] libmachine: defining domain...
	I1201 20:14:53.131637   52992 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-931553</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-931553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1201 20:14:53.137460   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:62:05:cc in network default
	I1201 20:14:53.138149   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:53.138173   52992 main.go:143] libmachine: starting domain...
	I1201 20:14:53.138180   52992 main.go:143] libmachine: ensuring networks are active...
	I1201 20:14:53.139128   52992 main.go:143] libmachine: Ensuring network default is active
	I1201 20:14:53.139621   52992 main.go:143] libmachine: Ensuring network mk-no-preload-931553 is active
	I1201 20:14:53.140569   52992 main.go:143] libmachine: getting domain XML...
	I1201 20:14:53.141900   52992 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-931553</name>
	  <uuid>08df0a42-3710-4a20-9b3d-ff0dc04b7fcc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ac:7f:f0'/>
	      <source network='mk-no-preload-931553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:62:05:cc'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1201 20:14:55.109305   52992 main.go:143] libmachine: waiting for domain to start...
	I1201 20:14:55.110955   52992 main.go:143] libmachine: domain is now running
	I1201 20:14:55.110978   52992 main.go:143] libmachine: waiting for IP...
	I1201 20:14:55.111861   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.112595   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.112612   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.113055   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.113099   52992 retry.go:31] will retry after 197.950965ms: waiting for domain to come up
	I1201 20:14:55.312483   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.313263   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.313279   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.313685   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.313723   52992 retry.go:31] will retry after 277.642131ms: waiting for domain to come up
	I1201 20:14:55.593129   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.593759   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.593773   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.594138   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.594170   52992 retry.go:31] will retry after 352.723475ms: waiting for domain to come up
	I1201 20:14:55.949067   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.949849   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.949871   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.950467   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.950524   52992 retry.go:31] will retry after 559.448705ms: waiting for domain to come up
	I1201 20:14:56.511505   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:56.512360   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:56.512381   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:56.512896   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:56.512952   52992 retry.go:31] will retry after 668.010634ms: waiting for domain to come up
	I1201 20:14:57.183446   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:53.786626   52760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:14:53.946106   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:14:53.963354   52760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:14:53.989355   52760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:14:53.989416   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.002464   52760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:14:54.002540   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.016206   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.029704   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.043723   52760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:14:54.057882   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.070957   52760 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.087846   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.101197   52760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:14:54.112550   52760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:14:54.124663   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:54.311031   52760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:14:59.173899   52760 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.862827849s)
	I1201 20:14:59.173940   52760 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:14:59.174012   52760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:14:59.179694   52760 start.go:564] Will wait 60s for crictl version
	I1201 20:14:59.179756   52760 ssh_runner.go:195] Run: which crictl
	I1201 20:14:59.184414   52760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:14:59.230118   52760 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:14:59.230230   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.263521   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.300977   52760 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1201 20:14:56.154440   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:58.155139   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	I1201 20:14:59.656592   51918 pod_ready.go:94] pod "coredns-66bc5c9577-rr2vl" is "Ready"
	I1201 20:14:59.656624   51918 pod_ready.go:86] duration metric: took 41.008621411s for pod "coredns-66bc5c9577-rr2vl" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.656636   51918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.661405   51918 pod_ready.go:99] pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-xx6hx" not found
	I1201 20:14:59.661433   51918 pod_ready.go:86] duration metric: took 4.788543ms for pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.664999   51918 pod_ready.go:83] waiting for pod "etcd-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.670850   51918 pod_ready.go:94] pod "etcd-embed-certs-200621" is "Ready"
	I1201 20:14:59.670880   51918 pod_ready.go:86] duration metric: took 5.847545ms for pod "etcd-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.673898   51918 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.680335   51918 pod_ready.go:94] pod "kube-apiserver-embed-certs-200621" is "Ready"
	I1201 20:14:59.680362   51918 pod_ready.go:86] duration metric: took 6.440324ms for pod "kube-apiserver-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.683852   51918 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:57.104672   52634 out.go:252]   - Booting up control plane ...
	I1201 20:14:57.104781   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:14:57.104885   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:14:57.104984   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:14:57.122694   52634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:14:57.122856   52634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:14:57.132993   52634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:14:57.133269   52634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:14:57.133340   52634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:14:57.311524   52634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:14:57.311684   52634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:14:58.313692   52634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001883529s
	I1201 20:14:58.316806   52634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:14:58.316966   52634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.174:8444/livez
	I1201 20:14:58.317099   52634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:14:58.317696   52634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:15:00.053802   51918 pod_ready.go:94] pod "kube-controller-manager-embed-certs-200621" is "Ready"
	I1201 20:15:00.053850   51918 pod_ready.go:86] duration metric: took 369.975229ms for pod "kube-controller-manager-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.254155   51918 pod_ready.go:83] waiting for pod "kube-proxy-n6llm" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.653581   51918 pod_ready.go:94] pod "kube-proxy-n6llm" is "Ready"
	I1201 20:15:00.653610   51918 pod_ready.go:86] duration metric: took 399.418914ms for pod "kube-proxy-n6llm" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.854535   51918 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:01.252999   51918 pod_ready.go:94] pod "kube-scheduler-embed-certs-200621" is "Ready"
	I1201 20:15:01.253032   51918 pod_ready.go:86] duration metric: took 398.463153ms for pod "kube-scheduler-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:01.253047   51918 pod_ready.go:40] duration metric: took 42.615846328s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:15:01.324407   51918 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:15:01.326276   51918 out.go:179] * Done! kubectl is now configured to use "embed-certs-200621" cluster and "default" namespace by default
	I1201 20:14:57.184380   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:57.184403   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:57.185015   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:57.185100   52992 retry.go:31] will retry after 857.008522ms: waiting for domain to come up
	I1201 20:14:58.044515   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:58.045181   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:58.045200   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:58.045608   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:58.045645   52992 retry.go:31] will retry after 905.95419ms: waiting for domain to come up
	I1201 20:14:58.953709   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:58.954618   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:58.954639   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:58.955146   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:58.955208   52992 retry.go:31] will retry after 1.408670452s: waiting for domain to come up
	I1201 20:15:00.365669   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:15:00.366611   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:15:00.366634   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:15:00.367163   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:15:00.367202   52992 retry.go:31] will retry after 1.833049132s: waiting for domain to come up
	I1201 20:14:59.306339   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.306927   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:59.306959   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.307206   52760 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1201 20:14:59.313485   52760 kubeadm.go:884] updating cluster {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:14:59.313668   52760 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:59.313743   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.361359   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.361384   52760 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:14:59.361439   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.400162   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.400196   52760 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:14:59.400206   52760 kubeadm.go:935] updating node { 192.168.83.165 8443 v1.34.2 crio true true} ...
	I1201 20:14:59.400427   52760 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-092823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:14:59.400636   52760 ssh_runner.go:195] Run: crio config
	I1201 20:14:59.465906   52760 cni.go:84] Creating CNI manager for ""
	I1201 20:14:59.465932   52760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:59.465953   52760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:14:59.465980   52760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.165 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-092823 NodeName:pause-092823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:14:59.466141   52760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-092823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.165"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.165"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:14:59.466219   52760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:14:59.483088   52760 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:14:59.483155   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:14:59.498563   52760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1201 20:14:59.524693   52760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:14:59.553688   52760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1201 20:14:59.581709   52760 ssh_runner.go:195] Run: grep 192.168.83.165	control-plane.minikube.internal$ /etc/hosts
	I1201 20:14:59.586499   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:59.815049   52760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:14:59.839071   52760 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823 for IP: 192.168.83.165
	I1201 20:14:59.839094   52760 certs.go:195] generating shared ca certs ...
	I1201 20:14:59.839113   52760 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:59.839291   52760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:14:59.839352   52760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:14:59.839363   52760 certs.go:257] generating profile certs ...
	I1201 20:14:59.839525   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/client.key
	I1201 20:14:59.839599   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key.467a48e8
	I1201 20:14:59.839653   52760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key
	I1201 20:14:59.839841   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:14:59.839889   52760 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:14:59.839907   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:14:59.839940   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:14:59.839972   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:14:59.840008   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:14:59.840079   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:59.840980   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:14:59.877975   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:14:59.920143   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:14:59.958913   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:14:59.998579   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:15:00.040718   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:15:00.079238   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:15:00.115945   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:15:00.156664   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:15:00.254482   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:15:00.314684   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:15:00.423437   52760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:15:00.526481   52760 ssh_runner.go:195] Run: openssl version
	I1201 20:15:00.562753   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:15:00.613901   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633573   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633642   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.654970   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:15:00.678714   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:15:00.707854   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718201   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718272   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.734107   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:15:00.765014   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:15:00.802005   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814660   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814737   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.828225   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:15:00.851528   52760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:15:00.862966   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:15:00.879649   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:15:00.896138   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:15:00.912901   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:15:00.931391   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:15:00.949981   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:15:00.974116   52760 kubeadm.go:401] StartCluster: {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:15:00.974237   52760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:15:00.974317   52760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:15:01.099401   52760 cri.go:89] found id: "b06d0a76963ef52a5089e4fd9f7322e4c132cc7cef44a0a08aa8f5cdb5a85721"
	I1201 20:15:01.099424   52760 cri.go:89] found id: "57ec1c4b4eeef1781fa5e1f415b57c209059e745e9ed9ebd9f95ed9977eb2e49"
	I1201 20:15:01.099430   52760 cri.go:89] found id: "c903efca5f1e25a2e14a9d7025e07e8179e959d210c875262c49b8986bddc200"
	I1201 20:15:01.099435   52760 cri.go:89] found id: "cc26867cc3d6b7f6b97323de740dcc9dcc89282ab532326b58cba0dd488bb014"
	I1201 20:15:01.099439   52760 cri.go:89] found id: "163dc1a002a3236d0a9e0a45f1ad098210f847d393a710d6b68d41c70b87fc74"
	I1201 20:15:01.099443   52760 cri.go:89] found id: "2fa21d457805dc989eb2a8ef14f2955ec09a508de2674c5bc92ce8b6542a5051"
	I1201 20:15:01.099449   52760 cri.go:89] found id: "03cfd9a73d71461c21c0c9c0c15a1ee0ccc6a97d33909a53fa38e88ce0b2deae"
	I1201 20:15:01.099454   52760 cri.go:89] found id: ""
	I1201 20:15:01.099504   52760 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092823 -n pause-092823
helpers_test.go:269: (dbg) Run:  kubectl --context pause-092823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-092823 -n pause-092823
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-092823 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-092823 logs -n 25: (1.384253544s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-539916                                                                                                                                                                                               │ old-k8s-version-539916       │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p stopped-upgrade-921033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-921033       │ jenkins │ v1.35.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │                     │
	│ start   │ -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                           │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ stop    │ stopped-upgrade-921033 stop                                                                                                                                                                                             │ stopped-upgrade-921033       │ jenkins │ v1.35.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p stopped-upgrade-921033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:13 UTC │
	│ delete  │ -p kubernetes-upgrade-903802                                                                                                                                                                                            │ kubernetes-upgrade-903802    │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ delete  │ -p disable-driver-mounts-893069                                                                                                                                                                                         │ disable-driver-mounts-893069 │ jenkins │ v1.37.0 │ 01 Dec 25 20:12 UTC │ 01 Dec 25 20:12 UTC │
	│ start   │ -p pause-092823 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-092823                 │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-921033 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │                     │
	│ delete  │ -p stopped-upgrade-921033                                                                                                                                                                                               │ stopped-upgrade-921033       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:13 UTC │
	│ start   │ -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                                    │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:15 UTC │
	│ start   │ -p cert-expiration-769037 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-769037       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-399758 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-399758       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │                     │
	│ delete  │ -p running-upgrade-399758                                                                                                                                                                                               │ running-upgrade-399758       │ jenkins │ v1.37.0 │ 01 Dec 25 20:13 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p cert-options-495506 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ delete  │ -p cert-expiration-769037                                                                                                                                                                                               │ cert-expiration-769037       │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2                                                  │ default-k8s-diff-port-240409 │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │                     │
	│ start   │ -p pause-092823 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-092823                 │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:15 UTC │
	│ ssh     │ cert-options-495506 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ ssh     │ -p cert-options-495506 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ delete  │ -p cert-options-495506                                                                                                                                                                                                  │ cert-options-495506          │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │ 01 Dec 25 20:14 UTC │
	│ start   │ -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                            │ no-preload-931553            │ jenkins │ v1.37.0 │ 01 Dec 25 20:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-200621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:15 UTC │ 01 Dec 25 20:15 UTC │
	│ stop    │ -p embed-certs-200621 --alsologtostderr -v=3                                                                                                                                                                            │ embed-certs-200621           │ jenkins │ v1.37.0 │ 01 Dec 25 20:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
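
	For context, the run under test here is the second start of the pause-092823 profile, which appears in the audit table above. Replaying that start by hand uses the same arguments recorded in the table (the binary path and profile name are specific to this CI host):

    # second start of the pause profile, flags taken verbatim from the audit entry above
    out/minikube-linux-amd64 start -p pause-092823 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio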
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 20:14:47
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 20:14:47.183527   52992 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:14:47.183745   52992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:47.183754   52992 out.go:374] Setting ErrFile to fd 2...
	I1201 20:14:47.183758   52992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:14:47.184009   52992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:14:47.184471   52992 out.go:368] Setting JSON to false
	I1201 20:14:47.185355   52992 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7030,"bootTime":1764613057,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:14:47.185409   52992 start.go:143] virtualization: kvm guest
	I1201 20:14:47.187377   52992 out.go:179] * [no-preload-931553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:14:47.188646   52992 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:14:47.188657   52992 notify.go:221] Checking for updates...
	I1201 20:14:47.191186   52992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:14:47.192560   52992 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:14:47.193868   52992 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:47.194991   52992 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:14:47.196291   52992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:14:47.198302   52992 config.go:182] Loaded profile config "default-k8s-diff-port-240409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198457   52992 config.go:182] Loaded profile config "embed-certs-200621": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198576   52992 config.go:182] Loaded profile config "guest-790070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1201 20:14:47.198779   52992 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:47.198939   52992 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:14:47.236736   52992 out.go:179] * Using the kvm2 driver based on user configuration
	I1201 20:14:47.237917   52992 start.go:309] selected driver: kvm2
	I1201 20:14:47.237941   52992 start.go:927] validating driver "kvm2" against <nil>
	I1201 20:14:47.237965   52992 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:14:47.238685   52992 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 20:14:47.238999   52992 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1201 20:14:47.239030   52992 cni.go:84] Creating CNI manager for ""
	I1201 20:14:47.239074   52992 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:47.239083   52992 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 20:14:47.239153   52992 start.go:353] cluster config:
	{Name:no-preload-931553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-931553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
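
	This cluster config is persisted as JSON under the profile directory once the start proceeds. A quick way to inspect it on the CI host is a one-liner along these lines (assumes jq is available; the config.json path is the one logged when the profile is saved further below):

    # pretty-print the saved cluster config for the no-preload profile
    jq . /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/config.json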
	I1201 20:14:47.239302   52992 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.240953   52992 out.go:179] * Starting "no-preload-931553" primary control-plane node in "no-preload-931553" cluster
	I1201 20:14:45.167420   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:14:45.167454   52634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:14:45.167481   52634 buildroot.go:174] setting up certificates
	I1201 20:14:45.167495   52634 provision.go:84] configureAuth start
	I1201 20:14:45.170546   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.171136   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.171163   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174074   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174542   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.174568   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.174763   52634 provision.go:143] copyHostCerts
	I1201 20:14:45.174871   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:14:45.174888   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:14:45.175462   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:14:45.175567   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:14:45.175576   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:14:45.175608   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:14:45.175696   52634 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:14:45.175706   52634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:14:45.175730   52634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:14:45.175779   52634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-240409 san=[127.0.0.1 192.168.61.174 default-k8s-diff-port-240409 localhost minikube]
	I1201 20:14:45.303247   52634 provision.go:177] copyRemoteCerts
	I1201 20:14:45.303300   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:14:45.306523   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.307034   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.307073   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.307239   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:45.403134   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:14:45.441176   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1201 20:14:45.484059   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:14:45.516999   52634 provision.go:87] duration metric: took 349.489118ms to configureAuth
	I1201 20:14:45.517040   52634 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:14:45.517313   52634 config.go:182] Loaded profile config "default-k8s-diff-port-240409": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:45.520969   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.521607   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.521650   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.521894   52634 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:45.522168   52634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1201 20:14:45.522211   52634 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1201 20:14:45.817359   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:14:45.817394   52634 machine.go:97] duration metric: took 1.068512033s to provisionDockerMachine
	I1201 20:14:45.817406   52634 client.go:176] duration metric: took 20.135801226s to LocalClient.Create
	I1201 20:14:45.817421   52634 start.go:167] duration metric: took 20.135870719s to libmachine.API.Create "default-k8s-diff-port-240409"
	I1201 20:14:45.817431   52634 start.go:293] postStartSetup for "default-k8s-diff-port-240409" (driver="kvm2")
	I1201 20:14:45.817443   52634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:14:45.817519   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:14:45.821191   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.821768   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.821807   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.822038   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:45.917327   52634 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:14:45.922592   52634 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:14:45.922620   52634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:14:45.922722   52634 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:14:45.922888   52634 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:14:45.923041   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:14:45.935359   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:45.971132   52634 start.go:296] duration metric: took 153.685278ms for postStartSetup
	I1201 20:14:45.974892   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.975596   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.975640   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.976018   52634 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/config.json ...
	I1201 20:14:45.976317   52634 start.go:128] duration metric: took 20.297099778s to createHost
	I1201 20:14:45.979316   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.979884   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:45.979922   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:45.980143   52634 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:45.980410   52634 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.61.174 22 <nil> <nil>}
	I1201 20:14:45.980422   52634 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:14:46.101999   52634 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764620086.053227742
	
	I1201 20:14:46.102026   52634 fix.go:216] guest clock: 1764620086.053227742
	I1201 20:14:46.102036   52634 fix.go:229] Guest: 2025-12-01 20:14:46.053227742 +0000 UTC Remote: 2025-12-01 20:14:45.976335891 +0000 UTC m=+20.928515406 (delta=76.891851ms)
	I1201 20:14:46.102057   52634 fix.go:200] guest clock delta is within tolerance: 76.891851ms
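
	The delta reported here is simply the guest reading of date +%s.%N minus the host-side timestamp taken for the check (46.053227742 - 45.976335891, about 0.0769 s), compared against minikube's skew tolerance. A rough manual version of the same check, assuming the profile is up and awk is available on the host, looks like:

    # compare guest and host clocks and print the skew in seconds
    guest=$(out/minikube-linux-amd64 -p default-k8s-diff-port-240409 ssh "date +%s.%N")
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "clock skew: %.6f s\n", g - h }'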
	I1201 20:14:46.102065   52634 start.go:83] releasing machines lock for "default-k8s-diff-port-240409", held for 20.423107523s
	I1201 20:14:46.105969   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.106390   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.106417   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.106984   52634 ssh_runner.go:195] Run: cat /version.json
	I1201 20:14:46.107080   52634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:14:46.111016   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111379   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111503   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.111570   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.111843   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:46.112197   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:46.112230   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:46.112414   52634 sshutil.go:53] new ssh client: &{IP:192.168.61.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/default-k8s-diff-port-240409/id_rsa Username:docker}
	I1201 20:14:46.196040   52634 ssh_runner.go:195] Run: systemctl --version
	I1201 20:14:46.237058   52634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:14:46.423020   52634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:14:46.430998   52634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:14:46.431066   52634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:14:46.454197   52634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1201 20:14:46.454231   52634 start.go:496] detecting cgroup driver to use...
	I1201 20:14:46.454306   52634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:14:46.473074   52634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:14:46.495781   52634 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:14:46.495855   52634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:14:46.516376   52634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:14:46.540083   52634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:14:46.713375   52634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:14:46.937030   52634 docker.go:234] disabling docker service ...
	I1201 20:14:46.937104   52634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:14:46.956633   52634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:14:46.973154   52634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1201 20:14:47.157385   52634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:14:47.312983   52634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:14:47.334236   52634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:14:47.356621   52634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:14:47.356692   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.369199   52634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:14:47.369268   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.381552   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.393491   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.406008   52634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:14:47.420194   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.432901   52634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.457057   52634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:47.470020   52634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:14:47.480226   52634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1201 20:14:47.480282   52634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1201 20:14:47.503255   52634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:14:47.515273   52634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:47.661750   52634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:14:47.780382   52634 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:14:47.780501   52634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:14:47.786241   52634 start.go:564] Will wait 60s for crictl version
	I1201 20:14:47.786298   52634 ssh_runner.go:195] Run: which crictl
	I1201 20:14:47.790475   52634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:14:47.834682   52634 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:14:47.834759   52634 ssh_runner.go:195] Run: crio --version
	I1201 20:14:47.868455   52634 ssh_runner.go:195] Run: crio --version
	I1201 20:14:47.905160   52634 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
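
	The runtime preparation logged above boils down to a handful of edits on the guest: point crictl at the CRI-O socket, pin the pause image and cgroup driver in the CRI-O drop-in, fall back to loading br_netfilter when the bridge sysctl is missing, enable IP forwarding, and restart CRI-O. Consolidated into a single sketch, using the same files and values shown in the log:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup driver in the CRI-O drop-in
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # netfilter fallback seen above: load br_netfilter if the sysctl is absent, then enable forwarding
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    # apply the changes
    sudo systemctl daemon-reload && sudo systemctl restart crio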
	I1201 20:14:46.109857   52760 out.go:252] * Updating the running kvm2 "pause-092823" VM ...
	I1201 20:14:46.110055   52760 machine.go:94] provisionDockerMachine start ...
	I1201 20:14:46.113317   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.113741   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.113793   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.114012   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.114202   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.114212   52760 main.go:143] libmachine: About to run SSH command:
	hostname
	I1201 20:14:46.229463   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.229492   52760 buildroot.go:166] provisioning hostname "pause-092823"
	I1201 20:14:46.232733   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233167   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.233203   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.233419   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.233696   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.233710   52760 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-092823 && echo "pause-092823" | sudo tee /etc/hostname
	I1201 20:14:46.367930   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-092823
	
	I1201 20:14:46.371177   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371647   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.371687   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.371880   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.372115   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.372133   52760 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-092823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-092823/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-092823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1201 20:14:46.479675   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1201 20:14:46.479706   52760 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-12903/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-12903/.minikube}
	I1201 20:14:46.479731   52760 buildroot.go:174] setting up certificates
	I1201 20:14:46.479747   52760 provision.go:84] configureAuth start
	I1201 20:14:46.484077   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.484730   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.484756   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.487821   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488406   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.488430   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.488632   52760 provision.go:143] copyHostCerts
	I1201 20:14:46.488690   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem, removing ...
	I1201 20:14:46.488704   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem
	I1201 20:14:46.488770   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/ca.pem (1078 bytes)
	I1201 20:14:46.488905   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem, removing ...
	I1201 20:14:46.488916   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem
	I1201 20:14:46.488942   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/cert.pem (1123 bytes)
	I1201 20:14:46.488997   52760 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem, removing ...
	I1201 20:14:46.489004   52760 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem
	I1201 20:14:46.489029   52760 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-12903/.minikube/key.pem (1675 bytes)
	I1201 20:14:46.489082   52760 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem org=jenkins.pause-092823 san=[127.0.0.1 192.168.83.165 localhost minikube pause-092823]
	I1201 20:14:46.686710   52760 provision.go:177] copyRemoteCerts
	I1201 20:14:46.686768   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1201 20:14:46.690170   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690641   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.690677   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.690864   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:46.779314   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1201 20:14:46.819937   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1201 20:14:46.860475   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1201 20:14:46.894812   52760 provision.go:87] duration metric: took 415.050462ms to configureAuth
	I1201 20:14:46.894865   52760 buildroot.go:189] setting minikube options for container-runtime
	I1201 20:14:46.895130   52760 config.go:182] Loaded profile config "pause-092823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:14:46.898900   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899407   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:46.899438   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:46.899741   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:46.900056   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:46.900086   52760 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1201 20:14:47.155446   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:49.156538   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
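
	These two warnings come from a concurrent start (pid 51918) blocked on a CoreDNS pod that never reports Ready, which is the interesting symptom for this post-mortem. The same pod can be inspected from the host with kubectl; PROFILE below is a placeholder for whichever profile that start belongs to, and k8s-app=kube-dns is the standard CoreDNS selector rather than something taken from this log:

    # show the CoreDNS pod the waiter is blocked on, then dump its conditions and events
    kubectl --context PROFILE -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl --context PROFILE -n kube-system describe pod coredns-66bc5c9577-rr2vl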
	I1201 20:14:47.909197   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:47.909533   52634 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e4:a6:1c", ip: ""} in network mk-default-k8s-diff-port-240409: {Iface:virbr3 ExpiryTime:2025-12-01 21:14:42 +0000 UTC Type:0 Mac:52:54:00:e4:a6:1c Iaid: IPaddr:192.168.61.174 Prefix:24 Hostname:default-k8s-diff-port-240409 Clientid:01:52:54:00:e4:a6:1c}
	I1201 20:14:47.909558   52634 main.go:143] libmachine: domain default-k8s-diff-port-240409 has defined IP address 192.168.61.174 and MAC address 52:54:00:e4:a6:1c in network mk-default-k8s-diff-port-240409
	I1201 20:14:47.909723   52634 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1201 20:14:47.914571   52634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:14:47.930354   52634 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-240409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:14:47.930525   52634 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:47.930587   52634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:47.961975   52634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1201 20:14:47.962046   52634 ssh_runner.go:195] Run: which lz4
	I1201 20:14:47.966898   52634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1201 20:14:47.971810   52634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1201 20:14:47.971859   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1201 20:14:49.285236   52634 crio.go:462] duration metric: took 1.318369525s to copy over tarball
	I1201 20:14:49.285335   52634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
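
	Because a matching preload exists for v1.34.2 on cri-o, the images are not pulled one by one: the roughly 340 MB tarball is copied to the guest as /preloaded.tar.lz4 and unpacked over /var. The extraction and the follow-up verification are the same two commands logged here and a few lines below:

    # unpack the preloaded image tarball into /var on the guest (run after the tarball has been copied over)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # confirm the images landed in the CRI-O store
    sudo crictl images --output json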
	I1201 20:14:47.242088   52992 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1201 20:14:47.242209   52992 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/config.json ...
	I1201 20:14:47.242238   52992 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/config.json: {Name:mkd2874a8690bfa0ca1e32be6071cc44a5528829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:47.242353   52992 cache.go:107] acquiring lock: {Name:mk84dbde9ead2d8c90480eafbfe358f5ca6aa5c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242378   52992 cache.go:107] acquiring lock: {Name:mk2254117dde6fc1fafd6b7df235ae600972ce9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242406   52992 cache.go:107] acquiring lock: {Name:mkd72cc39eeccad67fc1dd1790c288bb41eb7d61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242426   52992 start.go:360] acquireMachinesLock for no-preload-931553: {Name:mka5785482004af70e425c1e38474157ff061d66 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1201 20:14:47.242395   52992 cache.go:107] acquiring lock: {Name:mkc6fd8399654fe6c9b82a431ef0794c2a7a4690 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242402   52992 cache.go:107] acquiring lock: {Name:mkf2a81443f61667e01c303dee76734732f0b214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242455   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 exists
	I1201 20:14:47.242457   52992 cache.go:107] acquiring lock: {Name:mkcff4b34d831100ce78e43d852eb57d715d1454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242466   52992 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0" took 153.634µs
	I1201 20:14:47.242442   52992 cache.go:107] acquiring lock: {Name:mkd2fd755acadb41dbec175a149204e5724f4d65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242500   52992 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242570   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 exists
	I1201 20:14:47.242586   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1201 20:14:47.242593   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 exists
	I1201 20:14:47.242597   52992 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0" took 195.663µs
	I1201 20:14:47.242581   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 exists
	I1201 20:14:47.242608   52992 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242604   52992 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0" took 249.11µs
	I1201 20:14:47.242613   52992 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0" took 173.914µs
	I1201 20:14:47.242623   52992 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 succeeded
	I1201 20:14:47.242598   52992 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 208.049µs
	I1201 20:14:47.242635   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1201 20:14:47.242637   52992 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1201 20:14:47.242637   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 exists
	I1201 20:14:47.242616   52992 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242642   52992 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 261.605µs
	I1201 20:14:47.242651   52992 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1201 20:14:47.242647   52992 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0" took 192.259µs
	I1201 20:14:47.242661   52992 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 succeeded
	I1201 20:14:47.242727   52992 cache.go:107] acquiring lock: {Name:mk6d9bc57e707fb535b355bc5d8318e6c5e321e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 20:14:47.242892   52992 cache.go:115] /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1201 20:14:47.242907   52992 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 228.65µs
	I1201 20:14:47.242929   52992 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1201 20:14:47.242944   52992 cache.go:87] Successfully saved all images to host disk.
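
[editor's note] The cache lines above all resolve to tarballs under <MINIKUBE_HOME>/cache/images/<arch>/, with the ":" before the tag replaced by "_". A minimal sketch of that path mapping (hypothetical helper, not minikube's actual cache.go code):

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    // cachePath maps an image reference such as "registry.k8s.io/pause:3.10.1"
    // to the on-disk tarball path seen in the log above. Sketch only.
    func cachePath(minikubeHome, arch, image string) string {
    	// "registry.k8s.io/pause:3.10.1" -> "registry.k8s.io/pause_3.10.1"
    	file := strings.ReplaceAll(image, ":", "_")
    	return filepath.Join(minikubeHome, "cache", "images", arch, file)
    }

    func main() {
    	home := "/home/jenkins/minikube-integration/21997-12903/.minikube"
    	fmt.Println(cachePath(home, "amd64", "registry.k8s.io/pause:3.10.1"))
    }

Running this prints the same pause_3.10.1 path that the "save to tar file ... succeeded" lines report.
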
	I1201 20:14:50.798238   52634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.512872769s)
	I1201 20:14:50.798265   52634 crio.go:469] duration metric: took 1.512991808s to extract the tarball
	I1201 20:14:50.798274   52634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1201 20:14:50.843142   52634 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:50.892975   52634 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:50.893000   52634 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:14:50.893010   52634 kubeadm.go:935] updating node { 192.168.61.174 8444 v1.34.2 crio true true} ...
	I1201 20:14:50.893115   52634 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-240409 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:14:50.893219   52634 ssh_runner.go:195] Run: crio config
	I1201 20:14:50.950575   52634 cni.go:84] Creating CNI manager for ""
	I1201 20:14:50.950609   52634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:50.950634   52634 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:14:50.950666   52634 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.174 APIServerPort:8444 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-240409 NodeName:default-k8s-diff-port-240409 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:14:50.950860   52634 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-240409"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:14:50.950943   52634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:14:50.963487   52634 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:14:50.963565   52634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:14:50.975724   52634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1201 20:14:50.996094   52634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:14:51.016562   52634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
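
[editor's note] The kubeadm.yaml rendered above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a file and lists each document's kind, assuming the gopkg.in/yaml.v3 package and a local "kubeadm.yaml" path (both assumptions; kubeadm validates the config with its own API machinery):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // path is an assumption
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // end of the multi-document stream
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }
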
	I1201 20:14:51.039504   52634 ssh_runner.go:195] Run: grep 192.168.61.174	control-plane.minikube.internal$ /etc/hosts
	I1201 20:14:51.044067   52634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1201 20:14:51.061235   52634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:51.208480   52634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:14:51.229890   52634 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409 for IP: 192.168.61.174
	I1201 20:14:51.229915   52634 certs.go:195] generating shared ca certs ...
	I1201 20:14:51.229931   52634 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.230090   52634 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:14:51.230145   52634 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:14:51.230172   52634 certs.go:257] generating profile certs ...
	I1201 20:14:51.230242   52634 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key
	I1201 20:14:51.230261   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt with IP's: []
	I1201 20:14:51.341598   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt ...
	I1201 20:14:51.341629   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: {Name:mk1340933d479a2dcb2255abd41dc9882eb49d76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.341859   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key ...
	I1201 20:14:51.341877   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.key: {Name:mkb5e72caa99e3637774a4d5cadafd1d55322ec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.342000   52634 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1
	I1201 20:14:51.342028   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.174]
	I1201 20:14:51.385601   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 ...
	I1201 20:14:51.385628   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1: {Name:mk6c64c21f2aef05c60c69fcce4b5db8312d9add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.385846   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1 ...
	I1201 20:14:51.385864   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1: {Name:mkc4bbd3995970395f2f457af1635fe000662e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.385986   52634 certs.go:382] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt.704906e1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt
	I1201 20:14:51.386093   52634 certs.go:386] copying /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key.704906e1 -> /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key
	I1201 20:14:51.386183   52634 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key
	I1201 20:14:51.386205   52634 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt with IP's: []
	I1201 20:14:51.506418   52634 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt ...
	I1201 20:14:51.506444   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt: {Name:mka309d8bdf4e1d7e03db0e4bfd44fc7378416fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.506612   52634 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key ...
	I1201 20:14:51.506624   52634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key: {Name:mkd9e00b9243d5ae537f07e285172b0875022977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:51.506787   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:14:51.506838   52634 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:14:51.506849   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:14:51.506876   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:14:51.506900   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:14:51.506922   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:14:51.506961   52634 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:51.507581   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:14:51.540569   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:14:51.570045   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:14:51.598996   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:14:51.628740   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1201 20:14:51.660569   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1201 20:14:51.697177   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:14:51.732493   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:14:51.766763   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:14:51.800429   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:14:51.835992   52634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:14:51.866224   52634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:14:51.886506   52634 ssh_runner.go:195] Run: openssl version
	I1201 20:14:51.893423   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:14:51.908294   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.914065   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.914148   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:14:51.921357   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:14:51.938153   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:14:51.954378   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.959773   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.959854   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:14:51.969013   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:14:51.981761   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:14:51.994587   52634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:14:51.999807   52634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:14:51.999902   52634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:14:52.007500   52634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
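
[editor's note] The test -L / ln -fs commands above install each CA under /etc/ssl/certs using OpenSSL's subject-hash naming (e.g. b5213941.0 for minikubeCA.pem). A local sketch of that two-step pattern, mirroring the shell the log runs over SSH (needs root to write /etc/ssl/certs; not minikube's actual code):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA computes the certificate's subject hash with openssl and
    // links /etc/ssl/certs/<hash>.0 at the PEM file.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	_ = os.Remove(link) // refresh an existing link, if any
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
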
	I1201 20:14:52.020509   52634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:14:52.025570   52634 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1201 20:14:52.025631   52634 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-240409 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:default-k8s-diff-port-240409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.174 Port:8444 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:14:52.025726   52634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:14:52.025924   52634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:14:52.063721   52634 cri.go:89] found id: ""
	I1201 20:14:52.063812   52634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1201 20:14:52.079313   52634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1201 20:14:52.092357   52634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1201 20:14:52.104730   52634 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1201 20:14:52.104751   52634 kubeadm.go:158] found existing configuration files:
	
	I1201 20:14:52.104800   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1201 20:14:52.116234   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1201 20:14:52.116304   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1201 20:14:52.128749   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1201 20:14:52.140246   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1201 20:14:52.140316   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1201 20:14:52.153560   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1201 20:14:52.165005   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1201 20:14:52.165078   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1201 20:14:52.176925   52634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1201 20:14:52.188508   52634 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1201 20:14:52.188575   52634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
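
[editor's note] The grep/rm pairs above implement a simple rule: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. A local sketch of that pattern (the log runs the equivalent shell over SSH; paths and endpoint are taken from the log):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		b, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(b), endpoint) {
    			// Missing or stale: remove so kubeadm writes a fresh one.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Printf("remove %s: %v", f, rmErr)
    			}
    		}
    	}
    }
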
	I1201 20:14:52.204974   52634 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1201 20:14:52.276619   52634 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1201 20:14:52.276757   52634 kubeadm.go:319] [preflight] Running pre-flight checks
	I1201 20:14:52.386811   52634 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1201 20:14:52.386984   52634 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1201 20:14:52.387145   52634 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1201 20:14:52.398245   52634 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1201 20:14:52.736723   52992 start.go:364] duration metric: took 5.494244255s to acquireMachinesLock for "no-preload-931553"
	I1201 20:14:52.736800   52992 start.go:93] Provisioning new machine with config: &{Name:no-preload-931553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-931553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1201 20:14:52.736959   52992 start.go:125] createHost starting for "" (driver="kvm2")
	I1201 20:14:52.488148   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1201 20:14:52.488175   52760 machine.go:97] duration metric: took 6.37809979s to provisionDockerMachine
	I1201 20:14:52.488187   52760 start.go:293] postStartSetup for "pause-092823" (driver="kvm2")
	I1201 20:14:52.488195   52760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1201 20:14:52.488261   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1201 20:14:52.491556   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492162   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.492208   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.492403   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.576626   52760 ssh_runner.go:195] Run: cat /etc/os-release
	I1201 20:14:52.581811   52760 info.go:137] Remote host: Buildroot 2025.02.8
	I1201 20:14:52.581846   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/addons for local assets ...
	I1201 20:14:52.581909   52760 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-12903/.minikube/files for local assets ...
	I1201 20:14:52.582032   52760 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem -> 168682.pem in /etc/ssl/certs
	I1201 20:14:52.582295   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1201 20:14:52.594982   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:52.625436   52760 start.go:296] duration metric: took 137.237635ms for postStartSetup
	I1201 20:14:52.625470   52760 fix.go:56] duration metric: took 6.523253983s for fixHost
	I1201 20:14:52.628161   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628602   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.628625   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.628891   52760 main.go:143] libmachine: Using SSH client type: native
	I1201 20:14:52.629082   52760 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.165 22 <nil> <nil>}
	I1201 20:14:52.629092   52760 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1201 20:14:52.736550   52760 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764620092.731694487
	
	I1201 20:14:52.736575   52760 fix.go:216] guest clock: 1764620092.731694487
	I1201 20:14:52.736585   52760 fix.go:229] Guest: 2025-12-01 20:14:52.731694487 +0000 UTC Remote: 2025-12-01 20:14:52.625474204 +0000 UTC m=+18.993217111 (delta=106.220283ms)
	I1201 20:14:52.736604   52760 fix.go:200] guest clock delta is within tolerance: 106.220283ms
	I1201 20:14:52.736610   52760 start.go:83] releasing machines lock for "pause-092823", held for 6.634421762s
	I1201 20:14:52.740375   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.740960   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.740994   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.741631   52760 ssh_runner.go:195] Run: cat /version.json
	I1201 20:14:52.741783   52760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1201 20:14:52.745853   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.745875   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746286   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746314   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746289   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:52.746401   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:52.746529   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.746712   52760 sshutil.go:53] new ssh client: &{IP:192.168.83.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/pause-092823/id_rsa Username:docker}
	I1201 20:14:52.824741   52760 ssh_runner.go:195] Run: systemctl --version
	I1201 20:14:52.864010   52760 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1201 20:14:53.029292   52760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1201 20:14:53.039060   52760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1201 20:14:53.039157   52760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1201 20:14:53.051696   52760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1201 20:14:53.051726   52760 start.go:496] detecting cgroup driver to use...
	I1201 20:14:53.051817   52760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1201 20:14:53.084736   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1201 20:14:53.105317   52760 docker.go:218] disabling cri-docker service (if available) ...
	I1201 20:14:53.105382   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1201 20:14:53.127096   52760 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1201 20:14:53.146720   52760 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1201 20:14:53.342461   52760 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1201 20:14:53.535204   52760 docker.go:234] disabling docker service ...
	I1201 20:14:53.535269   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1201 20:14:53.570923   52760 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1201 20:14:53.590115   52760 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1201 20:14:51.654622   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:53.657064   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	I1201 20:14:52.399821   52634 out.go:252]   - Generating certificates and keys ...
	I1201 20:14:52.399920   52634 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1201 20:14:52.399989   52634 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1201 20:14:53.722898   52634 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1201 20:14:53.921250   52634 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1201 20:14:54.350121   52634 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1201 20:14:54.517731   52634 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1201 20:14:54.912311   52634 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1201 20:14:54.912720   52634 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-240409 localhost] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1201 20:14:55.264355   52634 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1201 20:14:55.264627   52634 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-240409 localhost] and IPs [192.168.61.174 127.0.0.1 ::1]
	I1201 20:14:55.510470   52634 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1201 20:14:55.542577   52634 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1201 20:14:55.737314   52634 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1201 20:14:55.737464   52634 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1201 20:14:55.819568   52634 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1201 20:14:55.950058   52634 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1201 20:14:56.413524   52634 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1201 20:14:56.882736   52634 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1201 20:14:57.096140   52634 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1201 20:14:57.096762   52634 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1201 20:14:57.101975   52634 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1201 20:14:52.739150   52992 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1201 20:14:52.739405   52992 start.go:159] libmachine.API.Create for "no-preload-931553" (driver="kvm2")
	I1201 20:14:52.739448   52992 client.go:173] LocalClient.Create starting
	I1201 20:14:52.739555   52992 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem
	I1201 20:14:52.739605   52992 main.go:143] libmachine: Decoding PEM data...
	I1201 20:14:52.739632   52992 main.go:143] libmachine: Parsing certificate...
	I1201 20:14:52.739722   52992 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem
	I1201 20:14:52.739756   52992 main.go:143] libmachine: Decoding PEM data...
	I1201 20:14:52.739774   52992 main.go:143] libmachine: Parsing certificate...
	I1201 20:14:52.740315   52992 main.go:143] libmachine: creating domain...
	I1201 20:14:52.740336   52992 main.go:143] libmachine: creating network...
	I1201 20:14:52.742116   52992 main.go:143] libmachine: found existing default network
	I1201 20:14:52.742565   52992 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 20:14:52.744501   52992 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:b8:b4} reservation:<nil>}
	I1201 20:14:52.745293   52992 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:49:0b:51} reservation:<nil>}
	I1201 20:14:52.746341   52992 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:ff:87} reservation:<nil>}
	I1201 20:14:52.747608   52992 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebe120}
	I1201 20:14:52.747716   52992 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-931553</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1201 20:14:52.753708   52992 main.go:143] libmachine: creating private network mk-no-preload-931553 192.168.72.0/24...
	I1201 20:14:52.845143   52992 main.go:143] libmachine: private network mk-no-preload-931553 192.168.72.0/24 created
	I1201 20:14:52.845512   52992 main.go:143] libmachine: <network>
	  <name>mk-no-preload-931553</name>
	  <uuid>258ac5e3-cb60-4228-9bb3-eded68491e19</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:43:9e:bd'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
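
[editor's note] The mk-no-preload-931553 network XML above can be reproduced with the standard library's encoding/xml; a sketch of building an equivalent <network> document (minikube itself renders a template and defines it through the libvirt API, so this is illustration only):

    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"log"
    )

    // network mirrors the fields shown in the log's <network> definition.
    type network struct {
    	XMLName xml.Name `xml:"network"`
    	Name    string   `xml:"name"`
    	DNS     struct {
    		Enable string `xml:"enable,attr"`
    	} `xml:"dns"`
    	IP struct {
    		Address string `xml:"address,attr"`
    		Netmask string `xml:"netmask,attr"`
    		DHCP    struct {
    			Range struct {
    				Start string `xml:"start,attr"`
    				End   string `xml:"end,attr"`
    			} `xml:"range"`
    		} `xml:"dhcp"`
    	} `xml:"ip"`
    }

    func main() {
    	n := network{Name: "mk-no-preload-931553"}
    	n.DNS.Enable = "no"
    	n.IP.Address, n.IP.Netmask = "192.168.72.1", "255.255.255.0"
    	n.IP.DHCP.Range.Start, n.IP.DHCP.Range.End = "192.168.72.2", "192.168.72.253"
    	out, err := xml.MarshalIndent(n, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out))
    }
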
	
	I1201 20:14:52.845558   52992 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 ...
	I1201 20:14:52.845590   52992 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso
	I1201 20:14:52.845602   52992 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:52.845675   52992 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-12903/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso...
	I1201 20:14:53.080568   52992 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/id_rsa...
	I1201 20:14:53.129635   52992 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk...
	I1201 20:14:53.129677   52992 main.go:143] libmachine: Writing magic tar header
	I1201 20:14:53.129705   52992 main.go:143] libmachine: Writing SSH key tar header
	I1201 20:14:53.129844   52992 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 ...
	I1201 20:14:53.129943   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553
	I1201 20:14:53.129972   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553 (perms=drwx------)
	I1201 20:14:53.129988   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube/machines
	I1201 20:14:53.130007   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube/machines (perms=drwxr-xr-x)
	I1201 20:14:53.130028   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:14:53.130045   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903/.minikube (perms=drwxr-xr-x)
	I1201 20:14:53.130058   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-12903
	I1201 20:14:53.130071   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-12903 (perms=drwxrwxr-x)
	I1201 20:14:53.130079   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1201 20:14:53.130090   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1201 20:14:53.130105   52992 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1201 20:14:53.130122   52992 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1201 20:14:53.130136   52992 main.go:143] libmachine: checking permissions on dir: /home
	I1201 20:14:53.130149   52992 main.go:143] libmachine: skipping /home - not owner
	I1201 20:14:53.130157   52992 main.go:143] libmachine: defining domain...
	I1201 20:14:53.131637   52992 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-931553</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-931553'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1201 20:14:53.137460   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:62:05:cc in network default
	I1201 20:14:53.138149   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:53.138173   52992 main.go:143] libmachine: starting domain...
	I1201 20:14:53.138180   52992 main.go:143] libmachine: ensuring networks are active...
	I1201 20:14:53.139128   52992 main.go:143] libmachine: Ensuring network default is active
	I1201 20:14:53.139621   52992 main.go:143] libmachine: Ensuring network mk-no-preload-931553 is active
	I1201 20:14:53.140569   52992 main.go:143] libmachine: getting domain XML...
	I1201 20:14:53.141900   52992 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-931553</name>
	  <uuid>08df0a42-3710-4a20-9b3d-ff0dc04b7fcc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-12903/.minikube/machines/no-preload-931553/no-preload-931553.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ac:7f:f0'/>
	      <source network='mk-no-preload-931553'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:62:05:cc'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1201 20:14:55.109305   52992 main.go:143] libmachine: waiting for domain to start...
	I1201 20:14:55.110955   52992 main.go:143] libmachine: domain is now running
	I1201 20:14:55.110978   52992 main.go:143] libmachine: waiting for IP...
	I1201 20:14:55.111861   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.112595   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.112612   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.113055   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.113099   52992 retry.go:31] will retry after 197.950965ms: waiting for domain to come up
	I1201 20:14:55.312483   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.313263   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.313279   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.313685   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.313723   52992 retry.go:31] will retry after 277.642131ms: waiting for domain to come up
	I1201 20:14:55.593129   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.593759   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.593773   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.594138   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.594170   52992 retry.go:31] will retry after 352.723475ms: waiting for domain to come up
	I1201 20:14:55.949067   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:55.949849   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:55.949871   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:55.950467   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:55.950524   52992 retry.go:31] will retry after 559.448705ms: waiting for domain to come up
	I1201 20:14:56.511505   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:56.512360   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:56.512381   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:56.512896   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:56.512952   52992 retry.go:31] will retry after 668.010634ms: waiting for domain to come up
	I1201 20:14:57.183446   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
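
[editor's note] The "will retry after ..." lines above poll the freshly started domain for a DHCP lease with an increasing delay. A minimal retry-with-backoff sketch of that waiting loop (lookupIP is a hypothetical stand-in for the lease/ARP queries, not a libvirt call):

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"time"
    )

    // lookupIP stands in for querying DHCP leases / ARP tables for the domain.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    // waitForIP retries with a growing delay until the domain reports an address.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(domain); err == nil {
    			return ip, nil
    		}
    		log.Printf("will retry after %v: waiting for domain to come up", delay)
    		time.Sleep(delay)
    		delay += delay / 2 // roughly the growth seen in the log
    	}
    	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
    }

    func main() {
    	ip, err := waitForIP("no-preload-931553", 3*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(ip)
    }
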
	I1201 20:14:53.786626   52760 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1201 20:14:53.946106   52760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1201 20:14:53.963354   52760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1201 20:14:53.989355   52760 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1201 20:14:53.989416   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.002464   52760 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1201 20:14:54.002540   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.016206   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.029704   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.043723   52760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1201 20:14:54.057882   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.070957   52760 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.087846   52760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1201 20:14:54.101197   52760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1201 20:14:54.112550   52760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1201 20:14:54.124663   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:54.311031   52760 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1201 20:14:59.173899   52760 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.862827849s)
	I1201 20:14:59.173940   52760 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1201 20:14:59.174012   52760 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1201 20:14:59.179694   52760 start.go:564] Will wait 60s for crictl version
	I1201 20:14:59.179756   52760 ssh_runner.go:195] Run: which crictl
	I1201 20:14:59.184414   52760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1201 20:14:59.230118   52760 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1201 20:14:59.230230   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.263521   52760 ssh_runner.go:195] Run: crio --version
	I1201 20:14:59.300977   52760 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1201 20:14:56.154440   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	W1201 20:14:58.155139   51918 pod_ready.go:104] pod "coredns-66bc5c9577-rr2vl" is not "Ready", error: <nil>
	I1201 20:14:59.656592   51918 pod_ready.go:94] pod "coredns-66bc5c9577-rr2vl" is "Ready"
	I1201 20:14:59.656624   51918 pod_ready.go:86] duration metric: took 41.008621411s for pod "coredns-66bc5c9577-rr2vl" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.656636   51918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.661405   51918 pod_ready.go:99] pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-xx6hx" not found
	I1201 20:14:59.661433   51918 pod_ready.go:86] duration metric: took 4.788543ms for pod "coredns-66bc5c9577-xx6hx" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.664999   51918 pod_ready.go:83] waiting for pod "etcd-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.670850   51918 pod_ready.go:94] pod "etcd-embed-certs-200621" is "Ready"
	I1201 20:14:59.670880   51918 pod_ready.go:86] duration metric: took 5.847545ms for pod "etcd-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.673898   51918 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.680335   51918 pod_ready.go:94] pod "kube-apiserver-embed-certs-200621" is "Ready"
	I1201 20:14:59.680362   51918 pod_ready.go:86] duration metric: took 6.440324ms for pod "kube-apiserver-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:59.683852   51918 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:14:57.104672   52634 out.go:252]   - Booting up control plane ...
	I1201 20:14:57.104781   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1201 20:14:57.104885   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1201 20:14:57.104984   52634 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1201 20:14:57.122694   52634 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1201 20:14:57.122856   52634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1201 20:14:57.132993   52634 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1201 20:14:57.133269   52634 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1201 20:14:57.133340   52634 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1201 20:14:57.311524   52634 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1201 20:14:57.311684   52634 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1201 20:14:58.313692   52634 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001883529s
	I1201 20:14:58.316806   52634 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1201 20:14:58.316966   52634 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.174:8444/livez
	I1201 20:14:58.317099   52634 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1201 20:14:58.317696   52634 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1201 20:15:00.053802   51918 pod_ready.go:94] pod "kube-controller-manager-embed-certs-200621" is "Ready"
	I1201 20:15:00.053850   51918 pod_ready.go:86] duration metric: took 369.975229ms for pod "kube-controller-manager-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.254155   51918 pod_ready.go:83] waiting for pod "kube-proxy-n6llm" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.653581   51918 pod_ready.go:94] pod "kube-proxy-n6llm" is "Ready"
	I1201 20:15:00.653610   51918 pod_ready.go:86] duration metric: took 399.418914ms for pod "kube-proxy-n6llm" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:00.854535   51918 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:01.252999   51918 pod_ready.go:94] pod "kube-scheduler-embed-certs-200621" is "Ready"
	I1201 20:15:01.253032   51918 pod_ready.go:86] duration metric: took 398.463153ms for pod "kube-scheduler-embed-certs-200621" in "kube-system" namespace to be "Ready" or be gone ...
	I1201 20:15:01.253047   51918 pod_ready.go:40] duration metric: took 42.615846328s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1201 20:15:01.324407   51918 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1201 20:15:01.326276   51918 out.go:179] * Done! kubectl is now configured to use "embed-certs-200621" cluster and "default" namespace by default
	I1201 20:14:57.184380   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:57.184403   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:57.185015   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:57.185100   52992 retry.go:31] will retry after 857.008522ms: waiting for domain to come up
	I1201 20:14:58.044515   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:58.045181   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:58.045200   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:58.045608   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:58.045645   52992 retry.go:31] will retry after 905.95419ms: waiting for domain to come up
	I1201 20:14:58.953709   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:14:58.954618   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:14:58.954639   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:14:58.955146   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:14:58.955208   52992 retry.go:31] will retry after 1.408670452s: waiting for domain to come up
	I1201 20:15:00.365669   52992 main.go:143] libmachine: domain no-preload-931553 has defined MAC address 52:54:00:ac:7f:f0 in network mk-no-preload-931553
	I1201 20:15:00.366611   52992 main.go:143] libmachine: no network interface addresses found for domain no-preload-931553 (source=lease)
	I1201 20:15:00.366634   52992 main.go:143] libmachine: trying to list again with source=arp
	I1201 20:15:00.367163   52992 main.go:143] libmachine: unable to find current IP address of domain no-preload-931553 in network mk-no-preload-931553 (interfaces detected: [])
	I1201 20:15:00.367202   52992 retry.go:31] will retry after 1.833049132s: waiting for domain to come up
	I1201 20:14:59.306339   52760 main.go:143] libmachine: domain pause-092823 has defined MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.306927   52760 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:79:ac", ip: ""} in network mk-pause-092823: {Iface:virbr5 ExpiryTime:2025-12-01 21:13:28 +0000 UTC Type:0 Mac:52:54:00:d2:79:ac Iaid: IPaddr:192.168.83.165 Prefix:24 Hostname:pause-092823 Clientid:01:52:54:00:d2:79:ac}
	I1201 20:14:59.306959   52760 main.go:143] libmachine: domain pause-092823 has defined IP address 192.168.83.165 and MAC address 52:54:00:d2:79:ac in network mk-pause-092823
	I1201 20:14:59.307206   52760 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1201 20:14:59.313485   52760 kubeadm.go:884] updating cluster {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1201 20:14:59.313668   52760 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 20:14:59.313743   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.361359   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.361384   52760 crio.go:433] Images already preloaded, skipping extraction
	I1201 20:14:59.361439   52760 ssh_runner.go:195] Run: sudo crictl images --output json
	I1201 20:14:59.400162   52760 crio.go:514] all images are preloaded for cri-o runtime.
	I1201 20:14:59.400196   52760 cache_images.go:86] Images are preloaded, skipping loading
	I1201 20:14:59.400206   52760 kubeadm.go:935] updating node { 192.168.83.165 8443 v1.34.2 crio true true} ...
	I1201 20:14:59.400427   52760 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-092823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1201 20:14:59.400636   52760 ssh_runner.go:195] Run: crio config
	I1201 20:14:59.465906   52760 cni.go:84] Creating CNI manager for ""
	I1201 20:14:59.465932   52760 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 20:14:59.465953   52760 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1201 20:14:59.465980   52760 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.165 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-092823 NodeName:pause-092823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1201 20:14:59.466141   52760 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-092823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.165"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.165"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1201 20:14:59.466219   52760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1201 20:14:59.483088   52760 binaries.go:51] Found k8s binaries, skipping transfer
	I1201 20:14:59.483155   52760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1201 20:14:59.498563   52760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1201 20:14:59.524693   52760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1201 20:14:59.553688   52760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1201 20:14:59.581709   52760 ssh_runner.go:195] Run: grep 192.168.83.165	control-plane.minikube.internal$ /etc/hosts
	I1201 20:14:59.586499   52760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1201 20:14:59.815049   52760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1201 20:14:59.839071   52760 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823 for IP: 192.168.83.165
	I1201 20:14:59.839094   52760 certs.go:195] generating shared ca certs ...
	I1201 20:14:59.839113   52760 certs.go:227] acquiring lock for ca certs: {Name:mk7e1ff47c53decb016970932c61ce60ac92f0fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1201 20:14:59.839291   52760 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key
	I1201 20:14:59.839352   52760 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key
	I1201 20:14:59.839363   52760 certs.go:257] generating profile certs ...
	I1201 20:14:59.839525   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/client.key
	I1201 20:14:59.839599   52760 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key.467a48e8
	I1201 20:14:59.839653   52760 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key
	I1201 20:14:59.839841   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem (1338 bytes)
	W1201 20:14:59.839889   52760 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868_empty.pem, impossibly tiny 0 bytes
	I1201 20:14:59.839907   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca-key.pem (1679 bytes)
	I1201 20:14:59.839940   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/ca.pem (1078 bytes)
	I1201 20:14:59.839972   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/cert.pem (1123 bytes)
	I1201 20:14:59.840008   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/certs/key.pem (1675 bytes)
	I1201 20:14:59.840079   52760 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem (1708 bytes)
	I1201 20:14:59.840980   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1201 20:14:59.877975   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1201 20:14:59.920143   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1201 20:14:59.958913   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1201 20:14:59.998579   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1201 20:15:00.040718   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1201 20:15:00.079238   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1201 20:15:00.115945   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/pause-092823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1201 20:15:00.156664   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/certs/16868.pem --> /usr/share/ca-certificates/16868.pem (1338 bytes)
	I1201 20:15:00.254482   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/ssl/certs/168682.pem --> /usr/share/ca-certificates/168682.pem (1708 bytes)
	I1201 20:15:00.314684   52760 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1201 20:15:00.423437   52760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1201 20:15:00.526481   52760 ssh_runner.go:195] Run: openssl version
	I1201 20:15:00.562753   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16868.pem && ln -fs /usr/share/ca-certificates/16868.pem /etc/ssl/certs/16868.pem"
	I1201 20:15:00.613901   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633573   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  1 19:16 /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.633642   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16868.pem
	I1201 20:15:00.654970   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16868.pem /etc/ssl/certs/51391683.0"
	I1201 20:15:00.678714   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/168682.pem && ln -fs /usr/share/ca-certificates/168682.pem /etc/ssl/certs/168682.pem"
	I1201 20:15:00.707854   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718201   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  1 19:16 /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.718272   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/168682.pem
	I1201 20:15:00.734107   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/168682.pem /etc/ssl/certs/3ec20f2e.0"
	I1201 20:15:00.765014   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1201 20:15:00.802005   52760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814660   52760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  1 19:05 /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.814737   52760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1201 20:15:00.828225   52760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1201 20:15:00.851528   52760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1201 20:15:00.862966   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1201 20:15:00.879649   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1201 20:15:00.896138   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1201 20:15:00.912901   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1201 20:15:00.931391   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1201 20:15:00.949981   52760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1201 20:15:00.974116   52760 kubeadm.go:401] StartCluster: {Name:pause-092823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-092823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.165 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 20:15:00.974237   52760 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1201 20:15:00.974317   52760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1201 20:15:01.099401   52760 cri.go:89] found id: "b06d0a76963ef52a5089e4fd9f7322e4c132cc7cef44a0a08aa8f5cdb5a85721"
	I1201 20:15:01.099424   52760 cri.go:89] found id: "57ec1c4b4eeef1781fa5e1f415b57c209059e745e9ed9ebd9f95ed9977eb2e49"
	I1201 20:15:01.099430   52760 cri.go:89] found id: "c903efca5f1e25a2e14a9d7025e07e8179e959d210c875262c49b8986bddc200"
	I1201 20:15:01.099435   52760 cri.go:89] found id: "cc26867cc3d6b7f6b97323de740dcc9dcc89282ab532326b58cba0dd488bb014"
	I1201 20:15:01.099439   52760 cri.go:89] found id: "163dc1a002a3236d0a9e0a45f1ad098210f847d393a710d6b68d41c70b87fc74"
	I1201 20:15:01.099443   52760 cri.go:89] found id: "2fa21d457805dc989eb2a8ef14f2955ec09a508de2674c5bc92ce8b6542a5051"
	I1201 20:15:01.099449   52760 cri.go:89] found id: "03cfd9a73d71461c21c0c9c0c15a1ee0ccc6a97d33909a53fa38e88ce0b2deae"
	I1201 20:15:01.099454   52760 cri.go:89] found id: ""
	I1201 20:15:01.099504   52760 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092823 -n pause-092823
helpers_test.go:269: (dbg) Run:  kubectl --context pause-092823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (66.50s)
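For local triage, the flow this test exercises can be approximated directly with the minikube CLI. This is a rough sketch only, using the profile name, driver, runtime, and memory size taken from the log above; it is not the exact flag set the test harness passes:

    # first start creates the pause-092823 profile on the kvm2 driver with cri-o
    out/minikube-linux-amd64 start -p pause-092823 --memory=3072 --driver=kvm2 --container-runtime=crio
    # second start of the same profile should reuse the existing configuration without reconfiguring it
    out/minikube-linux-amd64 start -p pause-092823 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    # the post-mortem helpers above then check API server status and list any non-Running pods
    out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-092823 -n pause-092823
    kubectl --context pause-092823 get po -A --field-selector=status.phase!=Running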

                                                
                                    

Test pass (377/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 23.49
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.97
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.98
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.86
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.43
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.17
30 TestBinaryMirror 0.64
31 TestOffline 94.82
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 128.91
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 11.53
44 TestAddons/parallel/Registry 18.43
45 TestAddons/parallel/RegistryCreds 0.69
47 TestAddons/parallel/InspektorGadget 11.03
48 TestAddons/parallel/MetricsServer 7.09
50 TestAddons/parallel/CSI 62.36
51 TestAddons/parallel/Headlamp 19.84
52 TestAddons/parallel/CloudSpanner 6.92
53 TestAddons/parallel/LocalPath 61.09
54 TestAddons/parallel/NvidiaDevicePlugin 6.71
55 TestAddons/parallel/Yakd 11.92
57 TestAddons/StoppedEnableDisable 86.21
58 TestCertOptions 47.09
59 TestCertExpiration 261.78
61 TestForceSystemdFlag 57.94
62 TestForceSystemdEnv 49.27
67 TestErrorSpam/setup 39.16
68 TestErrorSpam/start 0.35
69 TestErrorSpam/status 0.67
70 TestErrorSpam/pause 1.5
71 TestErrorSpam/unpause 1.68
72 TestErrorSpam/stop 5
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 76.38
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 29.03
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
84 TestFunctional/serial/CacheCmd/cache/add_local 2.1
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.53
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 35.13
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.27
95 TestFunctional/serial/LogsFileCmd 1.28
96 TestFunctional/serial/InvalidService 4.76
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 18.43
100 TestFunctional/parallel/DryRun 0.22
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.82
106 TestFunctional/parallel/ServiceCmdConnect 9.92
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 42.91
110 TestFunctional/parallel/SSHCmd 0.38
111 TestFunctional/parallel/CpCmd 1.19
112 TestFunctional/parallel/MySQL 24.14
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 1.18
118 TestFunctional/parallel/NodeLabels 0.1
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
122 TestFunctional/parallel/License 0.42
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
127 TestFunctional/parallel/ServiceCmd/DeployApp 9.22
128 TestFunctional/parallel/ProfileCmd/profile_list 0.4
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
130 TestFunctional/parallel/MountCmd/any-port 21.18
131 TestFunctional/parallel/Version/short 0.06
132 TestFunctional/parallel/Version/components 0.78
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
136 TestFunctional/parallel/ServiceCmd/Format 0.27
137 TestFunctional/parallel/ServiceCmd/URL 0.38
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
151 TestFunctional/parallel/ImageCommands/ImageBuild 4.31
152 TestFunctional/parallel/ImageCommands/Setup 1.94
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.81
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.04
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.89
160 TestFunctional/parallel/MountCmd/specific-port 1.43
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.41
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 87.2
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 49.6
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.13
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.54
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.07
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.6
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 375.25
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.33
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.34
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.8
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 14.5
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.21
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.97
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.48
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.17
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 41.85
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.32
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.13
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 24.58
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.2
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.15
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.38
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.83
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.29
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.28
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.27
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.3
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.68
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.09
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.45
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.87
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.51
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.68
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.92
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.83
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 21.27
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.33
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.34
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 19.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.29
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.29
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.23
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.42
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.25
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.28
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.33
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 191.93
262 TestMultiControlPlane/serial/DeployApp 7.09
263 TestMultiControlPlane/serial/PingHostFromPods 1.34
264 TestMultiControlPlane/serial/AddWorkerNode 46.23
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 10.84
268 TestMultiControlPlane/serial/StopSecondaryNode 80.5
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.53
270 TestMultiControlPlane/serial/RestartSecondaryNode 34.41
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 350.3
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.41
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 255.99
276 TestMultiControlPlane/serial/RestartCluster 91.47
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
278 TestMultiControlPlane/serial/AddSecondaryNode 78.88
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
284 TestJSONOutput/start/Command 76.23
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.74
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.62
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.84
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 82.7
316 TestMountStart/serial/StartWithMountFirst 19.76
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 21.09
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.68
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.24
323 TestMountStart/serial/RestartStopped 18.63
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 127.11
328 TestMultiNode/serial/DeployApp2Nodes 5.49
329 TestMultiNode/serial/PingHostFrom2Pods 0.86
330 TestMultiNode/serial/AddNode 45.36
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 6.13
334 TestMultiNode/serial/StopNode 2.51
335 TestMultiNode/serial/StartAfterStop 41.18
336 TestMultiNode/serial/RestartKeepsNodes 289.86
337 TestMultiNode/serial/DeleteNode 2.52
338 TestMultiNode/serial/StopMultiNode 173.9
339 TestMultiNode/serial/RestartMultiNode 82.91
340 TestMultiNode/serial/ValidateNameConflict 38.79
347 TestScheduledStopUnix 107.75
351 TestRunningBinaryUpgrade 391.65
353 TestKubernetesUpgrade 102.91
364 TestStartStop/group/old-k8s-version/serial/FirstStart 81.11
365 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
366 TestNoKubernetes/serial/StartWithK8s 77.32
367 TestNoKubernetes/serial/StartWithStopK8s 5.89
368 TestStartStop/group/old-k8s-version/serial/DeployApp 11.41
369 TestNoKubernetes/serial/Start 23.08
370 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.26
371 TestStartStop/group/old-k8s-version/serial/Stop 85.82
372 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
373 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
374 TestNoKubernetes/serial/ProfileList 6.07
375 TestNoKubernetes/serial/Stop 1.55
376 TestNoKubernetes/serial/StartNoArgs 33.09
380 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
385 TestNetworkPlugins/group/false 3.54
386 TestISOImage/Setup 22.34
391 TestISOImage/Binaries/crictl 0.24
392 TestISOImage/Binaries/curl 0.19
393 TestISOImage/Binaries/docker 0.18
394 TestISOImage/Binaries/git 0.19
395 TestISOImage/Binaries/iptables 0.21
396 TestISOImage/Binaries/podman 0.19
397 TestISOImage/Binaries/rsync 0.21
398 TestISOImage/Binaries/socat 0.2
399 TestISOImage/Binaries/wget 0.18
400 TestISOImage/Binaries/VBoxControl 0.19
401 TestISOImage/Binaries/VBoxService 0.2
402 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
403 TestStartStop/group/old-k8s-version/serial/SecondStart 78.1
404 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
405 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
406 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
407 TestStartStop/group/old-k8s-version/serial/Pause 2.79
408 TestStoppedBinaryUpgrade/Setup 3.25
409 TestStoppedBinaryUpgrade/Upgrade 80.66
411 TestPause/serial/Start 93.65
412 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
414 TestStartStop/group/embed-certs/serial/FirstStart 91.48
416 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.64
419 TestStartStop/group/no-preload/serial/FirstStart 76.16
420 TestStartStop/group/embed-certs/serial/DeployApp 12.38
421 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
422 TestStartStop/group/embed-certs/serial/Stop 87.14
424 TestStartStop/group/newest-cni/serial/FirstStart 53.04
425 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
426 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
427 TestStartStop/group/no-preload/serial/DeployApp 10.35
428 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.3
429 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
430 TestStartStop/group/no-preload/serial/Stop 83.73
431 TestStartStop/group/newest-cni/serial/DeployApp 0
432 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.99
433 TestStartStop/group/newest-cni/serial/Stop 7.05
434 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
435 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
436 TestStartStop/group/embed-certs/serial/SecondStart 45.26
437 TestStartStop/group/newest-cni/serial/SecondStart 59.43
438 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
439 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 7.01
440 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.76
441 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
442 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
443 TestStartStop/group/no-preload/serial/SecondStart 63
444 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
445 TestStartStop/group/embed-certs/serial/Pause 3.02
446 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
447 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
448 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
449 TestStartStop/group/newest-cni/serial/Pause 2.91
450 TestNetworkPlugins/group/auto/Start 101.98
451 TestNetworkPlugins/group/kindnet/Start 100.12
452 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
453 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
454 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
455 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.56
456 TestNetworkPlugins/group/calico/Start 81.75
457 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
458 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
459 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
460 TestStartStop/group/no-preload/serial/Pause 3.16
461 TestNetworkPlugins/group/custom-flannel/Start 79.42
462 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
463 TestNetworkPlugins/group/auto/KubeletFlags 0.2
464 TestNetworkPlugins/group/auto/NetCatPod 12.29
465 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
466 TestNetworkPlugins/group/kindnet/NetCatPod 12.31
467 TestNetworkPlugins/group/auto/DNS 0.15
468 TestNetworkPlugins/group/auto/Localhost 0.15
469 TestNetworkPlugins/group/auto/HairPin 0.15
470 TestNetworkPlugins/group/kindnet/DNS 0.23
471 TestNetworkPlugins/group/kindnet/Localhost 0.18
472 TestNetworkPlugins/group/kindnet/HairPin 0.17
473 TestNetworkPlugins/group/enable-default-cni/Start 58.04
474 TestNetworkPlugins/group/calico/ControllerPod 6.01
475 TestNetworkPlugins/group/flannel/Start 87.57
476 TestNetworkPlugins/group/calico/KubeletFlags 0.19
477 TestNetworkPlugins/group/calico/NetCatPod 12.3
478 TestNetworkPlugins/group/calico/DNS 0.19
479 TestNetworkPlugins/group/calico/Localhost 0.15
480 TestNetworkPlugins/group/calico/HairPin 0.18
481 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
482 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.32
483 TestNetworkPlugins/group/custom-flannel/DNS 0.18
484 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
485 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
486 TestNetworkPlugins/group/bridge/Start 85.78
488 TestISOImage/PersistentMounts//data 0.19
489 TestISOImage/PersistentMounts//var/lib/docker 0.17
490 TestISOImage/PersistentMounts//var/lib/cni 0.21
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.2
492 TestISOImage/PersistentMounts//var/lib/minikube 0.2
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.23
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.22
495 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
496 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
497 TestISOImage/VersionJSON 0.19
498 TestISOImage/eBPFSupport 0.19
499 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
500 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
501 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
504 TestNetworkPlugins/group/flannel/NetCatPod 11.24
505 TestNetworkPlugins/group/flannel/DNS 0.15
506 TestNetworkPlugins/group/flannel/Localhost 0.11
507 TestNetworkPlugins/group/flannel/HairPin 0.12
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
509 TestNetworkPlugins/group/bridge/NetCatPod 10.23
510 TestNetworkPlugins/group/bridge/DNS 0.15
511 TestNetworkPlugins/group/bridge/Localhost 0.13
512 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (23.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-158731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-158731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (23.488453063s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (23.49s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1201 19:05:15.106024   16868 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1201 19:05:15.106113   16868 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
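
The preload-exists subtest only verifies that the tarball landed in the local cache. A minimal sketch of that check, using the cache layout logged by preload.go above; the MINIKUBE_HOME fallback is an assumption for illustration.

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func main() {
		// Cache layout copied from the preload.go lines above.
		root := os.Getenv("MINIKUBE_HOME")
		if root == "" {
			root = filepath.Join(os.Getenv("HOME"), ".minikube")
		}
		tarball := filepath.Join(root, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if fi, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
		} else {
			fmt.Printf("preload present: %s (%d bytes)\n", tarball, fi.Size())
		}
	}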

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-158731
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-158731: exit status 85 (69.735109ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-158731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-158731 │ jenkins │ v1.37.0 │ 01 Dec 25 19:04 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:04:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:04:51.670347   16880 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:04:51.671027   16880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:04:51.671039   16880 out.go:374] Setting ErrFile to fd 2...
	I1201 19:04:51.671045   16880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:04:51.671246   16880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	W1201 19:04:51.671380   16880 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-12903/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-12903/.minikube/config/config.json: no such file or directory
	I1201 19:04:51.671881   16880 out.go:368] Setting JSON to true
	I1201 19:04:51.672774   16880 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2835,"bootTime":1764613057,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:04:51.672846   16880 start.go:143] virtualization: kvm guest
	I1201 19:04:51.677442   16880 out.go:99] [download-only-158731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:04:51.677607   16880 notify.go:221] Checking for updates...
	W1201 19:04:51.677597   16880 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball: no such file or directory
	I1201 19:04:51.679131   16880 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:04:51.680780   16880 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:04:51.682235   16880 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:04:51.683474   16880 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:04:51.684711   16880 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1201 19:04:51.687299   16880 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 19:04:51.687547   16880 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:04:52.194145   16880 out.go:99] Using the kvm2 driver based on user configuration
	I1201 19:04:52.194210   16880 start.go:309] selected driver: kvm2
	I1201 19:04:52.194219   16880 start.go:927] validating driver "kvm2" against <nil>
	I1201 19:04:52.194582   16880 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:04:52.195251   16880 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1201 19:04:52.195425   16880 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 19:04:52.195458   16880 cni.go:84] Creating CNI manager for ""
	I1201 19:04:52.195581   16880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 19:04:52.195594   16880 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 19:04:52.195662   16880 start.go:353] cluster config:
	{Name:download-only-158731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-158731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:04:52.195900   16880 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 19:04:52.197563   16880 out.go:99] Downloading VM boot image ...
	I1201 19:04:52.197616   16880 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/iso/amd64/minikube-v1.37.0-1764600683-21997-amd64.iso
	I1201 19:05:02.581422   16880 out.go:99] Starting "download-only-158731" primary control-plane node in "download-only-158731" cluster
	I1201 19:05:02.581455   16880 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 19:05:02.674234   16880 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1201 19:05:02.674276   16880 cache.go:65] Caching tarball of preloaded images
	I1201 19:05:02.674434   16880 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1201 19:05:02.676431   16880 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1201 19:05:02.676460   16880 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1201 19:05:02.773450   16880 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1201 19:05:02.773612   16880 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-158731 host does not exist
	  To start a cluster, run: "minikube start -p download-only-158731"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
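
The LogsDuration subtests deliberately expect `minikube logs` to exit non-zero on a download-only profile (exit status 85 in this run, since the control-plane host was never created). A minimal sketch, with the binary path and profile name copied from the command above, of capturing that exit code from Go:

	package main
	
	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-158731")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case errors.As(err, &exitErr):
			// Expected path for a download-only profile: non-zero exit (85 in the run above).
			fmt.Printf("logs exited with code %d\n%s", exitErr.ExitCode(), out)
		case err != nil:
			log.Fatal(err) // could not start the binary at all
		default:
			fmt.Printf("logs unexpectedly succeeded:\n%s", out)
		}
	}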

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-158731
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (9.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-773690 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-773690 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.971223605s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1201 19:05:25.453401   16868 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1201 19:05:25.453440   16868 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-773690
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-773690: exit status 85 (73.085574ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-158731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-158731 │ jenkins │ v1.37.0 │ 01 Dec 25 19:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-158731                                                                                                                                                 │ download-only-158731 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-773690 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-773690 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:15.534463   17142 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:15.534749   17142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:15.534759   17142 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:15.534763   17142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:15.534946   17142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:05:15.535391   17142 out.go:368] Setting JSON to true
	I1201 19:05:15.536295   17142 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2859,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:15.536353   17142 start.go:143] virtualization: kvm guest
	I1201 19:05:15.538319   17142 out.go:99] [download-only-773690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:15.538514   17142 notify.go:221] Checking for updates...
	I1201 19:05:15.540195   17142 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:05:15.541668   17142 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:15.543110   17142 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:05:15.544468   17142 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:05:15.545760   17142 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1201 19:05:15.548184   17142 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1201 19:05:15.548444   17142 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:05:15.580765   17142 out.go:99] Using the kvm2 driver based on user configuration
	I1201 19:05:15.580809   17142 start.go:309] selected driver: kvm2
	I1201 19:05:15.580837   17142 start.go:927] validating driver "kvm2" against <nil>
	I1201 19:05:15.581160   17142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1201 19:05:15.581642   17142 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1201 19:05:15.581806   17142 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1201 19:05:15.581860   17142 cni.go:84] Creating CNI manager for ""
	I1201 19:05:15.581912   17142 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1201 19:05:15.581922   17142 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1201 19:05:15.581977   17142 start.go:353] cluster config:
	{Name:download-only-773690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-773690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:05:15.582086   17142 iso.go:125] acquiring lock: {Name:mk6a50ce57553a723db22dad35f70cd00228e9bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1201 19:05:15.583555   17142 out.go:99] Starting "download-only-773690" primary control-plane node in "download-only-773690" cluster
	I1201 19:05:15.583576   17142 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:16.034180   17142 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1201 19:05:16.034210   17142 cache.go:65] Caching tarball of preloaded images
	I1201 19:05:16.034384   17142 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1201 19:05:16.036229   17142 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1201 19:05:16.036248   17142 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1201 19:05:16.136697   17142 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1201 19:05:16.136754   17142 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/21997-12903/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-773690 host does not exist
	  To start a cluster, run: "minikube start -p download-only-773690"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-773690
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (2.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-433667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-433667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (2.98243333s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-433667
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-433667: exit status 85 (858.611998ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-158731 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-158731 │ jenkins │ v1.37.0 │ 01 Dec 25 19:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-158731                                                                                                                                                        │ download-only-158731 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-773690 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-773690 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ delete  │ -p download-only-773690                                                                                                                                                        │ download-only-773690 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │ 01 Dec 25 19:05 UTC │
	│ start   │ -o=json --download-only -p download-only-433667 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-433667 │ jenkins │ v1.37.0 │ 01 Dec 25 19:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/01 19:05:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1201 19:05:25.876783   17336 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:05:25.876932   17336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:25.876942   17336 out.go:374] Setting ErrFile to fd 2...
	I1201 19:05:25.876946   17336 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:05:25.877160   17336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:05:25.877573   17336 out.go:368] Setting JSON to true
	I1201 19:05:25.878883   17336 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2869,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:05:25.878932   17336 start.go:143] virtualization: kvm guest
	I1201 19:05:25.881073   17336 out.go:99] [download-only-433667] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:05:25.881206   17336 notify.go:221] Checking for updates...
	I1201 19:05:25.882609   17336 out.go:171] MINIKUBE_LOCATION=21997
	I1201 19:05:25.884762   17336 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:05:25.885996   17336 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:05:25.887360   17336 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:05:25.888528   17336 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-433667 host does not exist
	  To start a cluster, run: "minikube start -p download-only-433667"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.86s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-433667
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
x
+
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1201 19:05:30.827619   16868 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-004263 --alsologtostderr --binary-mirror http://127.0.0.1:36255 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-004263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-004263
--- PASS: TestBinaryMirror (0.64s)
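
TestBinaryMirror points --binary-mirror at a short-lived local HTTP endpoint (http://127.0.0.1:36255 in this run). A minimal sketch of standing up such an endpoint and handing it to a download-only start; the ./mirror directory and its contents are assumptions here, since the exact layout minikube expects from a mirror is not shown in this report.

	package main
	
	import (
		"log"
		"net"
		"net/http"
		"os/exec"
	)
	
	func main() {
		// Serve ./mirror on a random loopback port; directory layout is an assumption.
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go http.Serve(ln, http.FileServer(http.Dir("./mirror")))
	
		cmd := exec.Command("out/minikube-linux-amd64", "start", "--download-only",
			"-p", "binary-mirror-004263", "--alsologtostderr",
			"--binary-mirror", "http://"+ln.Addr().String(),
			"--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		log.Printf("%s", out)
		if err != nil {
			log.Fatal(err)
		}
	}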

                                                
                                    
x
+
TestOffline (94.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-265587 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-265587 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m33.884333388s)
helpers_test.go:175: Cleaning up "offline-crio-265587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-265587
--- PASS: TestOffline (94.82s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153147
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-153147: exit status 85 (66.060155ms)

                                                
                                                
-- stdout --
	* Profile "addons-153147" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153147"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153147
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-153147: exit status 85 (69.214117ms)

                                                
                                                
-- stdout --
	* Profile "addons-153147" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153147"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (128.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-153147 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-153147 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.907543202s)
--- PASS: TestAddons/Setup (128.91s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-153147 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-153147 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-153147 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-153147 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b2c7cc93-0f51-443c-a999-402fe4c9076b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b2c7cc93-0f51-443c-a999-402fe4c9076b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004947732s
addons_test.go:694: (dbg) Run:  kubectl --context addons-153147 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-153147 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-153147 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.470009ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-mfkdk" [11619fff-1af5-4b33-8893-bcb6ad33587c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002885631s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-pw4sl" [5755be46-29a3-4a7e-9349-89d5d6200020] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005113659s
addons_test.go:392: (dbg) Run:  kubectl --context addons-153147 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-153147 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-153147 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.356240064s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 ip
2025/12/01 19:08:18 [DEBUG] GET http://192.168.39.9:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.43s)
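
Many of the blocks above and below reduce to the same helper pattern: poll the cluster until every pod matching a label selector is Running. A minimal client-go sketch of that pattern, reusing the actual-registry=true selector and 6m0s timeout from the Registry test above; it is not the helpers_test.go implementation, and the kubeconfig path is an assumption (the CI job sets its own KUBECONFIG).

	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes ~/.kube/config; the CI run uses its own kubeconfig instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait above
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "actual-registry=true"})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("selector healthy")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for actual-registry=true pods")
	}
	
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}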

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.905565ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153147
addons_test.go:332: (dbg) Run:  kubectl --context addons-153147 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mgzfh" [5879a416-deb8-4b76-acf1-da8470bd108d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004168544s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable inspektor-gadget --alsologtostderr -v=1: (6.019881079s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.09s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 48.302596ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-r5qgp" [776145bf-6b03-48e3-bbd9-1460bb1d5b86] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003879086s
addons_test.go:463: (dbg) Run:  kubectl --context addons-153147 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.09s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.36s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1201 19:08:14.788260   16868 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1201 19:08:14.796629   16868 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1201 19:08:14.796652   16868 kapi.go:107] duration metric: took 8.412922ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.422012ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-153147 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-153147 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [467bcbf8-cf4a-48c4-b539-dda6a4028d4d] Pending
helpers_test.go:352: "task-pv-pod" [467bcbf8-cf4a-48c4-b539-dda6a4028d4d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [467bcbf8-cf4a-48c4-b539-dda6a4028d4d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003700992s
addons_test.go:572: (dbg) Run:  kubectl --context addons-153147 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-153147 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-153147 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-153147 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-153147 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-153147 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-153147 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4fac17d6-2dd9-4bdc-a87a-63563ff3e3a4] Pending
helpers_test.go:352: "task-pv-pod-restore" [4fac17d6-2dd9-4bdc-a87a-63563ff3e3a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4fac17d6-2dd9-4bdc-a87a-63563ff3e3a4] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.01633918s
addons_test.go:614: (dbg) Run:  kubectl --context addons-153147 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-153147 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-153147 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.94284033s)
--- PASS: TestAddons/parallel/CSI (62.36s)
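
The repeated helpers_test.go:402 lines above are a poll loop over `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound. A minimal sketch of the same loop driven from Go, with the context and claim name copied from the log and assuming kubectl is available on the host:

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		// Same command the helper runs above.
		args := []string{"--context", "addons-153147", "get", "pvc", "hpvc",
			"-o", "jsonpath={.status.phase}", "-n", "default"}
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait above
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", args...).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pvc hpvc to become Bound")
	}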

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-153147 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-jz6qg" [1c8199a5-403d-4c36-9c76-4d1e186c879c] Pending
helpers_test.go:352: "headlamp-dfcdc64b-jz6qg" [1c8199a5-403d-4c36-9c76-4d1e186c879c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-jz6qg" [1c8199a5-403d-4c36-9c76-4d1e186c879c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.005895714s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable headlamp --alsologtostderr -v=1: (5.938017535s)
--- PASS: TestAddons/parallel/Headlamp (19.84s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-k9z2t" [93cf1e62-30b0-4a39-8bcd-2b461e2585bf] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004575504s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.92s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (61.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-153147 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-153147 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [50740df1-5cb9-4426-9171-06bf19efc9fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [50740df1-5cb9-4426-9171-06bf19efc9fb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [50740df1-5cb9-4426-9171-06bf19efc9fb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.004782265s
addons_test.go:967: (dbg) Run:  kubectl --context addons-153147 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 ssh "cat /opt/local-path-provisioner/pvc-4148b11a-9b36-46c4-a96c-f1c2e80569aa_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-153147 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-153147 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.198080966s)
--- PASS: TestAddons/parallel/LocalPath (61.09s)
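
Note on the flow above: the local-path addon provisions a host directory for the PVC, the busybox pod writes file1 into it, and the test then reads the file straight off the node at /opt/local-path-provisioner/<pv-name>_<namespace>_<claim-name>/file1. One way to locate that directory for any claim (sketch; the jsonpath query for the bound PV name is an addition, not taken from the log above):

    kubectl --context addons-153147 get pvc test-pvc -o jsonpath='{.spec.volumeName}'
    out/minikube-linux-amd64 -p addons-153147 ssh "ls /opt/local-path-provisioner/"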

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rcdwp" [42b47333-4324-46b0-9473-d92effc8cb10] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004778278s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.71s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xfxwv" [3e753d70-86a7-468d-9e5c-626b2484af64] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003938411s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-153147 addons disable yakd --alsologtostderr -v=1: (5.914754293s)
--- PASS: TestAddons/parallel/Yakd (11.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (86.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-153147
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-153147: (1m26.007538801s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153147
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153147
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-153147
--- PASS: TestAddons/StoppedEnableDisable (86.21s)

                                                
                                    
x
+
TestCertOptions (47.09s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-495506 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1201 20:14:10.180010   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-495506 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (45.761091854s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-495506 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-495506 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-495506 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-495506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-495506
--- PASS: TestCertOptions (47.09s)
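
Note on the assertions above: the start flags request extra apiserver SANs (127.0.0.1, 192.168.15.15, localhost, www.google.com) and a non-default apiserver port (8555), and the openssl and kubeconfig checks confirm they were applied. A manual spot-check along the same lines (sketch; the grep filter and the --minify/jsonpath query are additions, not taken from the log):

    out/minikube-linux-amd64 -p cert-options-495506 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-495506 config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # expect the URL to end in :8555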

                                                
                                    
x
+
TestCertExpiration (261.78s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-769037 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-769037 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.460286947s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-769037 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-769037 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (27.429203766s)
helpers_test.go:175: Cleaning up "cert-expiration-769037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-769037
--- PASS: TestCertExpiration (261.78s)
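
Note on the sequence above: the first start issues certificates that expire after 3 minutes; once they lapse, the second start with --cert-expiration=8760h forces minikube to regenerate them, and the restart succeeding is the assertion. To inspect the resulting expiry by hand (sketch; the openssl -enddate call is an addition, reusing the certificate path shown in TestCertOptions):

    out/minikube-linux-amd64 -p cert-expiration-769037 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"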

                                                
                                    
x
+
TestForceSystemdFlag (57.94s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-580579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1201 20:10:25.636597   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-580579 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.71993034s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-580579 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-580579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-580579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-580579: (1.014661705s)
--- PASS: TestForceSystemdFlag (57.94s)
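
Note on the check above: --force-systemd is expected to switch CRI-O to the systemd cgroup manager, which is why the test cats /etc/crio/crio.conf.d/02-crio.conf after the start. A narrower spot-check (sketch; the exact key the test asserts on is an assumption here, CRI-O normally spells it cgroup_manager):

    out/minikube-linux-amd64 -p force-systemd-flag-580579 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # expected, under that assumption: cgroup_manager = "systemd"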

                                                
                                    
x
+
TestForceSystemdEnv (49.27s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-260581 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-260581 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.438248498s)
helpers_test.go:175: Cleaning up "force-systemd-env-260581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-260581
--- PASS: TestForceSystemdEnv (49.27s)
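
Note on the difference from TestForceSystemdFlag: no --force-systemd flag is passed here, so the systemd request presumably arrives via the environment; MINIKUBE_FORCE_SYSTEMD does appear (empty) in the dry-run output later in this report, so the variable name is real, but its use by this test is an inference. Sketch of that flow:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-260581 --memory=3072 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p force-systemd-env-260581 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"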

                                                
                                    
x
+
TestErrorSpam/setup (39.16s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-223566 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-223566 --driver=kvm2  --container-runtime=crio
E1201 19:12:41.107595   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.114133   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.125593   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.147070   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.188554   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.270019   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.431680   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:41.753432   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:42.395476   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:43.677113   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:46.239139   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:12:51.361090   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:13:01.602775   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-223566 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-223566 --driver=kvm2  --container-runtime=crio: (39.156629362s)
--- PASS: TestErrorSpam/setup (39.16s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 status
--- PASS: TestErrorSpam/status (0.67s)

                                                
                                    
x
+
TestErrorSpam/pause (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.68s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 unpause
--- PASS: TestErrorSpam/unpause (1.68s)

                                                
                                    
x
+
TestErrorSpam/stop (5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop: (1.924471127s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop: (1.381607318s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-223566 --log_dir /tmp/nospam-223566 stop: (1.691123332s)
--- PASS: TestErrorSpam/stop (5.00s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/test/nested/copy/16868/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (76.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1201 19:13:22.085002   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:14:03.046977   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-162795 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m16.381689049s)
--- PASS: TestFunctional/serial/StartWithProxy (76.38s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.03s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1201 19:14:31.608825   16868 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-162795 --alsologtostderr -v=8: (29.025617581s)
functional_test.go:678: soft start took 29.026354083s for "functional-162795" cluster.
I1201 19:15:00.634888   16868 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (29.03s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-162795 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:3.1: (1.137487119s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:3.3: (1.222418412s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 cache add registry.k8s.io/pause:latest: (1.187011642s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)
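
Note on what cache add does: each of the three commands pulls a pause image on the host, stores it in the local minikube cache, and loads it into CRI-O inside the node, which is what the verify_cache_inside_node and cache_reload tests below rely on. A quick confirmation that an added image is visible from the node (sketch; the grep filter is an addition):

    out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl images | grep pause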

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-162795 /tmp/TestFunctionalserialCacheCmdcacheadd_local1321363271/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache add minikube-local-cache-test:functional-162795
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 cache add minikube-local-cache-test:functional-162795: (1.750750191s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache delete minikube-local-cache-test:functional-162795
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-162795
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (170.71918ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.53s)
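
Note on the round trip above: crictl rmi deletes pause:latest inside the node, the failing crictl inspecti (exit 1, "no such image") proves it is gone, and cache reload pushes the host-side cached copy back in so the final inspecti succeeds. Condensed, using only commands already shown in this test:

    out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-162795 cache reload
    out/minikube-linux-amd64 -p functional-162795 ssh sudo crictl inspecti registry.k8s.io/pause:latest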

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 kubectl -- --context functional-162795 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-162795 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1201 19:15:24.969098   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-162795 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.125367657s)
functional_test.go:776: restart took 35.125459304s for "functional-162795" cluster.
I1201 19:15:43.712043   16868 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (35.13s)
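
Note on --extra-config: the apiserver.enable-admission-plugins=NamespaceAutoProvision option is forwarded to the kube-apiserver static pod, and the restart with --wait=all is what verifies the cluster comes back healthy with it. To confirm the flag actually landed on the apiserver (sketch; the component=kube-apiserver label is the usual kubeadm labelling, assumed here):

    kubectl --context functional-162795 -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins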

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-162795 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
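
Note on the health check above: it selects the control-plane static pods by their tier=control-plane label and asserts each reports phase Running and condition Ready. The human-readable form of the same query (the test itself uses -o=json):

    kubectl --context functional-162795 get po -l tier=control-plane -n kube-system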

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 logs: (1.264797749s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 logs --file /tmp/TestFunctionalserialLogsFileCmd163417889/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 logs --file /tmp/TestFunctionalserialLogsFileCmd163417889/001/logs.txt: (1.274502235s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.76s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-162795 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-162795
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-162795: exit status 115 (218.743023ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.232:32765 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-162795 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-162795 delete -f testdata/invalidsvc.yaml: (1.325340532s)
--- PASS: TestFunctional/serial/InvalidService (4.76s)
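
Note on the failure mode above: the Service object exists (so a NodePort URL can be printed), but no running pod backs it, so minikube service exits 115 with SVC_UNREACHABLE instead of handing out a dead URL. The same condition can be seen directly from the endpoints (sketch; this kubectl call is an addition, run before the service is deleted):

    kubectl --context functional-162795 get endpoints invalid-svc
    # an empty ENDPOINTS column means the NodePort URL above would not answer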

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 config get cpus: exit status 14 (73.442312ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 config get cpus: exit status 14 (66.386686ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
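
Note on the exit codes above: config get returns exit status 14 ("specified key could not be found in config") whenever the key is unset, which is what the unset/get/set/get/unset/get cycle relies on. Condensed, using only commands already shown:

    out/minikube-linux-amd64 -p functional-162795 config set cpus 2
    out/minikube-linux-amd64 -p functional-162795 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-162795 config unset cpus
    out/minikube-linux-amd64 -p functional-162795 config get cpus     # exit status 14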

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (18.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-162795 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-162795 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 23257: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.43s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-162795 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (111.866971ms)

                                                
                                                
-- stdout --
	* [functional-162795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:16:01.942612   22722 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:16:01.942974   22722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:16:01.942990   22722 out.go:374] Setting ErrFile to fd 2...
	I1201 19:16:01.942997   22722 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:16:01.943288   22722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:16:01.943918   22722 out.go:368] Setting JSON to false
	I1201 19:16:01.945154   22722 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3505,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:16:01.945232   22722 start.go:143] virtualization: kvm guest
	I1201 19:16:01.947457   22722 out.go:179] * [functional-162795] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:16:01.949517   22722 notify.go:221] Checking for updates...
	I1201 19:16:01.949524   22722 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:16:01.950956   22722 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:16:01.952305   22722 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:16:01.953764   22722 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:16:01.954970   22722 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:16:01.956094   22722 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:16:01.957693   22722 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:16:01.958336   22722 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:16:01.988725   22722 out.go:179] * Using the kvm2 driver based on existing profile
	I1201 19:16:01.989888   22722 start.go:309] selected driver: kvm2
	I1201 19:16:01.989906   22722 start.go:927] validating driver "kvm2" against &{Name:functional-162795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-162795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:16:01.990029   22722 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:16:01.992425   22722 out.go:203] 
	W1201 19:16:01.993692   22722 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 19:16:01.994846   22722 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-162795 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-162795 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (116.647794ms)

                                                
                                                
-- stdout --
	* [functional-162795] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:16:02.163256   22753 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:16:02.163351   22753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:16:02.163360   22753 out.go:374] Setting ErrFile to fd 2...
	I1201 19:16:02.163363   22753 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:16:02.163688   22753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:16:02.164124   22753 out.go:368] Setting JSON to false
	I1201 19:16:02.164963   22753 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3505,"bootTime":1764613057,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:16:02.165019   22753 start.go:143] virtualization: kvm guest
	I1201 19:16:02.169959   22753 out.go:179] * [functional-162795] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1201 19:16:02.171451   22753 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:16:02.171448   22753 notify.go:221] Checking for updates...
	I1201 19:16:02.172666   22753 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:16:02.173962   22753 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:16:02.175349   22753 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:16:02.176650   22753 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:16:02.177961   22753 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:16:02.179795   22753 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:16:02.180498   22753 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:16:02.212220   22753 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1201 19:16:02.213541   22753 start.go:309] selected driver: kvm2
	I1201 19:16:02.213560   22753 start.go:927] validating driver "kvm2" against &{Name:functional-162795 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-162795 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.232 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:16:02.213759   22753 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:16:02.216224   22753 out.go:203] 
	W1201 19:16:02.217597   22753 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 19:16:02.218998   22753 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-162795 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-162795 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8hv8m" [87d7ec49-461b-4916-aaa2-1124ddbc75c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8hv8m" [87d7ec49-461b-4916-aaa2-1124ddbc75c7] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005877818s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.232:30720
functional_test.go:1680: http://192.168.39.232:30720: success! body:
Request served by hello-node-connect-7d85dfc575-8hv8m

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.232:30720
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.92s)
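
Note on the flow above: the test deploys kicbase/echo-server, exposes it as a NodePort service, asks minikube for the URL, and asserts on the echoed request (the "Request served by ..." body). The same flow by hand (sketch; the curl step is an addition, the other commands repeat ones from this test):

    kubectl --context functional-162795 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-162795 expose deployment hello-node-connect --type=NodePort --port=8080
    curl "$(out/minikube-linux-amd64 -p functional-162795 service hello-node-connect --url)"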

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (42.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7e8c0029-ded3-4b76-8a37-2e731b979e36] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006389924s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-162795 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-162795 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-162795 get pvc myclaim -o=json
I1201 19:15:59.761132   16868 retry.go:31] will retry after 2.968821975s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1365ea44-ce21-4c96-a095-b10cb878935d ResourceVersion:697 Generation:0 CreationTimestamp:2025-12-01 19:15:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a28b10 VolumeMode:0xc001a28b20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-162795 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-162795 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:16:02.977991   16868 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b81c65f2-49e5-4d7a-929b-5372b2d734b3] Pending
helpers_test.go:352: "sp-pod" [b81c65f2-49e5-4d7a-929b-5372b2d734b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b81c65f2-49e5-4d7a-929b-5372b2d734b3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.014250442s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-162795 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-162795 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-162795 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:16:26.253889   16868 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4b47f27f-8a17-4ae7-8ff1-b55be7cbea22] Pending
helpers_test.go:352: "sp-pod" [4b47f27f-8a17-4ae7-8ff1-b55be7cbea22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/12/01 19:16:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [4b47f27f-8a17-4ae7-8ff1-b55be7cbea22] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00397333s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-162795 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.91s)
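The persistence check above boils down to: bind a claim, write through the volume from one pod, delete that pod, and confirm the file is still visible from its replacement. A hedged sketch using the same manifests and paths as the log; the kubectl wait calls stand in for the test's retry loops and need a reasonably recent kubectl for the jsonpath form:

    kubectl --context functional-162795 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-162795 wait --for=jsonpath='{.status.phase}'=Bound pvc/myclaim --timeout=120s
    kubectl --context functional-162795 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-162795 wait --for=condition=ready pod/sp-pod --timeout=360s
    kubectl --context functional-162795 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod and make sure the file written above survived the restart
    kubectl --context functional-162795 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-162795 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-162795 wait --for=condition=ready pod/sp-pod --timeout=360s
    kubectl --context functional-162795 exec sp-pod -- ls /tmp/mount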

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh -n functional-162795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cp functional-162795:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3610842565/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh -n functional-162795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh -n functional-162795 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.19s)
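The cp test is a round-trip through minikube cp plus an in-guest cat; a minimal sketch reusing the paths from the log (the final diff is an illustrative extra check):

    out/minikube-linux-amd64 -p functional-162795 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-162795 ssh -n functional-162795 "sudo cat /home/docker/cp-test.txt"
    # copy the file back out of the guest and compare it to the original
    out/minikube-linux-amd64 -p functional-162795 cp functional-162795:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt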

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-162795 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-f6m24" [69db59fa-293f-467d-804e-2a10876f992e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-f6m24" [69db59fa-293f-467d-804e-2a10876f992e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.008720871s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-162795 exec mysql-5bb876957f-f6m24 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-162795 exec mysql-5bb876957f-f6m24 -- mysql -ppassword -e "show databases;": exit status 1 (439.777602ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1201 19:16:14.103145   16868 retry.go:31] will retry after 1.124808706s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-162795 exec mysql-5bb876957f-f6m24 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-162795 exec mysql-5bb876957f-f6m24 -- mysql -ppassword -e "show databases;": exit status 1 (324.88841ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1201 19:16:15.553250   16868 retry.go:31] will retry after 761.153421ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-162795 exec mysql-5bb876957f-f6m24 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.14s)
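The ERROR 2002 failures above are expected while mysqld is still creating its socket, which is why the harness retries; a small equivalent retry loop, assuming the same app=mysql label the test waits on:

    POD=$(kubectl --context functional-162795 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
    # retry until the server socket is ready, mirroring the retry.go backoff in the log
    for i in $(seq 1 10); do
      kubectl --context functional-162795 exec "$POD" -- mysql -ppassword -e "show databases;" && break
      sleep 3
    done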

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16868/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/test/nested/copy/16868/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
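FileSync relies on minikube copying anything under the files/ tree of its home directory into the guest at the same path while the node is provisioned; a sketch assuming the default ~/.minikube home (this CI run uses a custom MINIKUBE_HOME):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/16868
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/16868/hosts
    out/minikube-linux-amd64 start -p functional-162795    # sync happens while the node is (re)provisioned
    out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/test/nested/copy/16868/hosts"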

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16868.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/ssl/certs/16868.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16868.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /usr/share/ca-certificates/16868.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/168682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/ssl/certs/168682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/168682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /usr/share/ca-certificates/168682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.18s)
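A compact way to repeat the per-file checks above, using the same guest paths from the log (the .0 files are the openssl subject-hash links for the synced certificates):

    for f in /etc/ssl/certs/16868.pem /usr/share/ca-certificates/16868.pem /etc/ssl/certs/51391683.0 \
             /etc/ssl/certs/168682.pem /usr/share/ca-certificates/168682.pem /etc/ssl/certs/3ec20f2e.0; do
      out/minikube-linux-amd64 -p functional-162795 ssh "sudo cat $f" > /dev/null && echo "$f present"
    done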

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-162795 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active docker": exit status 1 (161.559885ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active containerd": exit status 1 (195.025567ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
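The exit status 3 in the stderr blocks is simply systemctl's code for an inactive unit, which is the desired outcome on a crio cluster; a manual spot check:

    # is-active exits 0 only for an active unit, so || catches the expected "inactive" result
    out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active docker" || echo "docker disabled, as expected"
    out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active containerd" || echo "containerd disabled, as expected"
    out/minikube-linux-amd64 -p functional-162795 ssh "sudo systemctl is-active crio" && echo "crio is the active runtime"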

                                                
                                    
x
+
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-162795 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-162795 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wf88x" [ac13fa7f-028d-401f-b54f-2f73842a76cc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-wf88x" [ac13fa7f-028d-401f-b54f-2f73842a76cc] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.009097456s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "340.039543ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.725774ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "291.612343ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.943892ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (21.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdany-port3465627949/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764616552264398410" to /tmp/TestFunctionalparallelMountCmdany-port3465627949/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764616552264398410" to /tmp/TestFunctionalparallelMountCmdany-port3465627949/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764616552264398410" to /tmp/TestFunctionalparallelMountCmdany-port3465627949/001/test-1764616552264398410
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (167.98963ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:15:52.432748   16868 retry.go:31] will retry after 390.697193ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 19:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 19:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 19:15 test-1764616552264398410
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh cat /mount-9p/test-1764616552264398410
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-162795 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8e7749e8-4f2a-47cb-92ca-0c7ec4f61931] Pending
helpers_test.go:352: "busybox-mount" [8e7749e8-4f2a-47cb-92ca-0c7ec4f61931] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8e7749e8-4f2a-47cb-92ca-0c7ec4f61931] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8e7749e8-4f2a-47cb-92ca-0c7ec4f61931] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.011724354s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-162795 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdany-port3465627949/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.18s)
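A sketch of the 9p mount round-trip driven above; /tmp/demo-mount is an illustrative stand-in for the test's temporary host directory:

    mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-by-hand
    # keep the mount process running in the background while poking at it from the guest
    out/minikube-linux-amd64 mount -p functional-162795 /tmp/demo-mount:/mount-9p &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-162795 ssh "ls -la /mount-9p"
    # clean up: unmount inside the guest, then stop the mount process
    out/minikube-linux-amd64 -p functional-162795 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID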

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service list -o json
functional_test.go:1504: Took "263.185339ms" to run "out/minikube-linux-amd64 -p functional-162795 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.232:32369
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.232:32369
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
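The ServiceCmd subtests above all query the same hello-node service through different output flags; gathered in one place for reference:

    out/minikube-linux-amd64 -p functional-162795 service list
    out/minikube-linux-amd64 -p functional-162795 service list -o json
    out/minikube-linux-amd64 -p functional-162795 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-162795 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-162795 service hello-node --url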

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-162795 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-162795
localhost/kicbase/echo-server:functional-162795
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-162795 image ls --format short --alsologtostderr:
I1201 19:16:18.927297   23463 out.go:360] Setting OutFile to fd 1 ...
I1201 19:16:18.927577   23463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:18.927586   23463 out.go:374] Setting ErrFile to fd 2...
I1201 19:16:18.927590   23463 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:18.927753   23463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:16:18.928410   23463 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:18.928499   23463 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:18.930654   23463 ssh_runner.go:195] Run: systemctl --version
I1201 19:16:18.933079   23463 main.go:143] libmachine: domain functional-162795 has defined MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:18.933457   23463 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:ef", ip: ""} in network mk-functional-162795: {Iface:virbr1 ExpiryTime:2025-12-01 20:13:30 +0000 UTC Type:0 Mac:52:54:00:3b:3a:ef Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-162795 Clientid:01:52:54:00:3b:3a:ef}
I1201 19:16:18.933481   23463 main.go:143] libmachine: domain functional-162795 has defined IP address 192.168.39.232 and MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:18.933639   23463 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-162795/id_rsa Username:docker}
I1201 19:16:19.017145   23463 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-162795 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-162795  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-162795  │ 09bf6d39b31cf │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-162795  │ a486e51ebc1e0 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-162795 image ls --format table --alsologtostderr:
I1201 19:16:23.786457   23542 out.go:360] Setting OutFile to fd 1 ...
I1201 19:16:23.786692   23542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:23.786699   23542 out.go:374] Setting ErrFile to fd 2...
I1201 19:16:23.786703   23542 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:23.786934   23542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:16:23.787447   23542 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:23.787538   23542 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:23.789617   23542 ssh_runner.go:195] Run: systemctl --version
I1201 19:16:23.792306   23542 main.go:143] libmachine: domain functional-162795 has defined MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:23.792783   23542 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:ef", ip: ""} in network mk-functional-162795: {Iface:virbr1 ExpiryTime:2025-12-01 20:13:30 +0000 UTC Type:0 Mac:52:54:00:3b:3a:ef Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-162795 Clientid:01:52:54:00:3b:3a:ef}
I1201 19:16:23.792812   23542 main.go:143] libmachine: domain functional-162795 has defined IP address 192.168.39.232 and MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:23.792998   23542 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-162795/id_rsa Username:docker}
I1201 19:16:23.871916   23542 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-162795 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-162795"],"size":"4944818"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa3
8e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"10a80c2ab4c8f81047ca649a7f4119bc8cad308786ef220f3b96fc829b9590e8","repoDigests":["docker.io/library/f7243dcccd474389828a949c8244b4a83bf2d960dcb6bf82cfff66363f60df2d-tmp@sha256:c25b528a0ae11b16cb4857508defb0cff98c4f790699952cf1db44f0fa47b526"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io
/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a486e51ebc1e09177f327fc8e1dfa1eb4c21fc0568412cfae56655bf03282f4d","repoDigests":["localhost/minikube-local-cache-test@sha256:a96bd4c22c910b30ee5d728b7bf3a41c5d5c24afafb4805ceae1e33ea9ceff0c"],"repoTags":["localhost/minikube-local-cache-test:functional-162795"],"size":"3330"},{"id":"09bf6d39b31cfc2af36ea29689a3b372e81ccd6f76a756991a4430453d6540fc","repoDigests":["localhost/my-image@sha256:83774d0378fc386e488a61a72d8a19f5a81f969fc3a895bf648b70ca08a98ca8"],"repoTags":["localhost/my-image:functional-162795"],"size":"1468600"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02
b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{
"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819
cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547
"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a
4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-162795 image ls --format json --alsologtostderr:
I1201 19:16:23.602629   23531 out.go:360] Setting OutFile to fd 1 ...
I1201 19:16:23.602725   23531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:23.602731   23531 out.go:374] Setting ErrFile to fd 2...
I1201 19:16:23.602735   23531 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:23.602991   23531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:16:23.603562   23531 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:23.603647   23531 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:23.605873   23531 ssh_runner.go:195] Run: systemctl --version
I1201 19:16:23.608190   23531 main.go:143] libmachine: domain functional-162795 has defined MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:23.608631   23531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:ef", ip: ""} in network mk-functional-162795: {Iface:virbr1 ExpiryTime:2025-12-01 20:13:30 +0000 UTC Type:0 Mac:52:54:00:3b:3a:ef Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-162795 Clientid:01:52:54:00:3b:3a:ef}
I1201 19:16:23.608656   23531 main.go:143] libmachine: domain functional-162795 has defined IP address 192.168.39.232 and MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:23.608817   23531 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-162795/id_rsa Username:docker}
I1201 19:16:23.690022   23531 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-162795 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-162795
size: "4944818"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a486e51ebc1e09177f327fc8e1dfa1eb4c21fc0568412cfae56655bf03282f4d
repoDigests:
- localhost/minikube-local-cache-test@sha256:a96bd4c22c910b30ee5d728b7bf3a41c5d5c24afafb4805ceae1e33ea9ceff0c
repoTags:
- localhost/minikube-local-cache-test:functional-162795
size: "3330"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-162795 image ls --format yaml --alsologtostderr:
I1201 19:16:19.112260   23473 out.go:360] Setting OutFile to fd 1 ...
I1201 19:16:19.112512   23473 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:19.112522   23473 out.go:374] Setting ErrFile to fd 2...
I1201 19:16:19.112528   23473 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:19.112743   23473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:16:19.113309   23473 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:19.113425   23473 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:19.115602   23473 ssh_runner.go:195] Run: systemctl --version
I1201 19:16:19.118128   23473 main.go:143] libmachine: domain functional-162795 has defined MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:19.118540   23473 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:ef", ip: ""} in network mk-functional-162795: {Iface:virbr1 ExpiryTime:2025-12-01 20:13:30 +0000 UTC Type:0 Mac:52:54:00:3b:3a:ef Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-162795 Clientid:01:52:54:00:3b:3a:ef}
I1201 19:16:19.118578   23473 main.go:143] libmachine: domain functional-162795 has defined IP address 192.168.39.232 and MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:19.118760   23473 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-162795/id_rsa Username:docker}
I1201 19:16:19.196758   23473 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh pgrep buildkitd: exit status 1 (149.742542ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image build -t localhost/my-image:functional-162795 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 image build -t localhost/my-image:functional-162795 testdata/build --alsologtostderr: (3.964983316s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-162795 image build -t localhost/my-image:functional-162795 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 10a80c2ab4c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-162795
--> 09bf6d39b31
Successfully tagged localhost/my-image:functional-162795
09bf6d39b31cfc2af36ea29689a3b372e81ccd6f76a756991a4430453d6540fc
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-162795 image build -t localhost/my-image:functional-162795 testdata/build --alsologtostderr:
I1201 19:16:19.441665   23495 out.go:360] Setting OutFile to fd 1 ...
I1201 19:16:19.441854   23495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:19.441863   23495 out.go:374] Setting ErrFile to fd 2...
I1201 19:16:19.441867   23495 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:16:19.442062   23495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:16:19.442663   23495 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:19.443483   23495 config.go:182] Loaded profile config "functional-162795": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1201 19:16:19.445631   23495 ssh_runner.go:195] Run: systemctl --version
I1201 19:16:19.448077   23495 main.go:143] libmachine: domain functional-162795 has defined MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:19.448424   23495 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:3a:ef", ip: ""} in network mk-functional-162795: {Iface:virbr1 ExpiryTime:2025-12-01 20:13:30 +0000 UTC Type:0 Mac:52:54:00:3b:3a:ef Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:functional-162795 Clientid:01:52:54:00:3b:3a:ef}
I1201 19:16:19.448449   23495 main.go:143] libmachine: domain functional-162795 has defined IP address 192.168.39.232 and MAC address 52:54:00:3b:3a:ef in network mk-functional-162795
I1201 19:16:19.448569   23495 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-162795/id_rsa Username:docker}
I1201 19:16:19.532217   23495 build_images.go:162] Building image from path: /tmp/build.541781875.tar
I1201 19:16:19.532285   23495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 19:16:19.545122   23495 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.541781875.tar
I1201 19:16:19.550155   23495 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.541781875.tar: stat -c "%s %y" /var/lib/minikube/build/build.541781875.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.541781875.tar': No such file or directory
I1201 19:16:19.550190   23495 ssh_runner.go:362] scp /tmp/build.541781875.tar --> /var/lib/minikube/build/build.541781875.tar (3072 bytes)
I1201 19:16:19.582482   23495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.541781875
I1201 19:16:19.594602   23495 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.541781875 -xf /var/lib/minikube/build/build.541781875.tar
I1201 19:16:19.607146   23495 crio.go:315] Building image: /var/lib/minikube/build/build.541781875
I1201 19:16:19.607229   23495 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-162795 /var/lib/minikube/build/build.541781875 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1201 19:16:23.313703   23495 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-162795 /var/lib/minikube/build/build.541781875 --cgroup-manager=cgroupfs: (3.706447561s)
I1201 19:16:23.313771   23495 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.541781875
I1201 19:16:23.330000   23495 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.541781875.tar
I1201 19:16:23.343935   23495 build_images.go:218] Built localhost/my-image:functional-162795 from /tmp/build.541781875.tar
I1201 19:16:23.343975   23495 build_images.go:134] succeeded building to: functional-162795
I1201 19:16:23.343980   23495 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.31s)
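
Note on the ImageBuild run above: the testdata/build context itself is not included in this report, but the STEP 1/3 through 3/3 lines in the captured build output let it be reconstructed. A minimal sketch of an equivalent context, and of rebuilding the image by hand, follows; the content.txt payload is a placeholder assumption, and only the Dockerfile instructions are taken from the log.

mkdir -p testdata/build
cat > testdata/build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
printf 'placeholder payload\n' > testdata/build/content.txt
# Same invocation the test performs (functional_test.go:330):
out/minikube-linux-amd64 -p functional-162795 image build -t localhost/my-image:functional-162795 testdata/build --alsologtostderr
# Verify the result the same way the test does:
out/minikube-linux-amd64 -p functional-162795 image ls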

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.913598873s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-162795
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image load --daemon kicbase/echo-server:functional-162795 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-162795 image load --daemon kicbase/echo-server:functional-162795 --alsologtostderr: (2.622535068s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image load --daemon kicbase/echo-server:functional-162795 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-162795
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image load --daemon kicbase/echo-server:functional-162795 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image save kicbase/echo-server:functional-162795 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image rm kicbase/echo-server:functional-162795 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)
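
Note on the ImageSaveToFile, ImageRemove, and ImageLoadFromFile tests above: together they exercise a save-to-tarball, remove, and load-from-tarball round-trip. A minimal manual reproduction using the exact commands from the log is sketched below; the tarball path is the CI workspace path from this run and would normally be replaced with a local path.

out/minikube-linux-amd64 -p functional-162795 image save kicbase/echo-server:functional-162795 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-162795 image rm kicbase/echo-server:functional-162795 --alsologtostderr
out/minikube-linux-amd64 -p functional-162795 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-162795 image ls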

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-162795
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 image save --daemon kicbase/echo-server:functional-162795 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-162795
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdspecific-port2920260034/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.938089ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:16:13.696057   16868 retry.go:31] will retry after 311.035396ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdspecific-port2920260034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "sudo umount -f /mount-9p": exit status 1 (215.657368ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-162795 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdspecific-port2920260034/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T" /mount1: exit status 1 (237.584133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:16:15.118171   16868 retry.go:31] will retry after 499.482495ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-162795 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-162795 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-162795 /tmp/TestFunctionalparallelMountCmdVerifyCleanup441858630/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.41s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-162795
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-162795
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-162795
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-12903/.minikube/files/etc/test/nested/copy/16868/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (87.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1201 19:17:41.107281   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-510618 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m27.198292064s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (87.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (49.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1201 19:18:04.587247   16868 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --alsologtostderr -v=8
E1201 19:18:08.812399   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-510618 --alsologtostderr -v=8: (49.598746505s)
functional_test.go:678: soft start took 49.599093893s for "functional-510618" cluster.
I1201 19:18:54.186348   16868 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (49.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-510618 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:3.1: (1.213963508s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:3.3: (1.191259098s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 cache add registry.k8s.io/pause:latest: (1.134424689s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach22676162/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache add minikube-local-cache-test:functional-510618
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 cache add minikube-local-cache-test:functional-510618: (1.768314815s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache delete minikube-local-cache-test:functional-510618
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (181.723473ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 cache reload: (1.002072651s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 kubectl -- --context functional-510618 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-510618 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (375.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1201 19:20:51.294591   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.301021   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.312394   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.333822   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.375335   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.456859   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.618403   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:51.940108   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:52.582158   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:53.863787   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:20:56.426677   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:01.548559   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:11.790717   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:21:32.272854   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:22:13.235510   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:22:41.107087   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:23:35.160458   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-510618 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m15.24920267s)
functional_test.go:776: restart took 6m15.249352965s for "functional-510618" cluster.
I1201 19:25:17.516118   16868 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (375.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-510618 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 logs: (1.324563134s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2092502238/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2092502238/001/logs.txt: (1.333533731s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-510618 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-510618
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-510618: exit status 115 (241.843997ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.200:30736 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-510618 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-510618 delete -f testdata/invalidsvc.yaml: (1.347635071s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 config get cpus: exit status 14 (62.728932ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 config get cpus: exit status 14 (59.047824ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-510618 --alsologtostderr -v=1]
E1201 19:25:51.295376   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-510618 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 27566: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-510618 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (110.186939ms)

                                                
                                                
-- stdout --
	* [functional-510618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:25:50.291685   27507 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:25:50.291985   27507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:25:50.291997   27507 out.go:374] Setting ErrFile to fd 2...
	I1201 19:25:50.292002   27507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:25:50.292172   27507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:25:50.292592   27507 out.go:368] Setting JSON to false
	I1201 19:25:50.293653   27507 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4093,"bootTime":1764613057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:25:50.293703   27507 start.go:143] virtualization: kvm guest
	I1201 19:25:50.295854   27507 out.go:179] * [functional-510618] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 19:25:50.297204   27507 notify.go:221] Checking for updates...
	I1201 19:25:50.297211   27507 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:25:50.299288   27507 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:25:50.300917   27507 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:25:50.302240   27507 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:25:50.303413   27507 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:25:50.304706   27507 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:25:50.306220   27507 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:25:50.306675   27507 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:25:50.339500   27507 out.go:179] * Using the kvm2 driver based on existing profile
	I1201 19:25:50.340765   27507 start.go:309] selected driver: kvm2
	I1201 19:25:50.340778   27507 start.go:927] validating driver "kvm2" against &{Name:functional-510618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-510618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:25:50.340897   27507 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:25:50.342927   27507 out.go:203] 
	W1201 19:25:50.344079   27507 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1201 19:25:50.345138   27507 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-510618 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-510618 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (114.281156ms)

                                                
                                                
-- stdout --
	* [functional-510618] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:25:50.508887   27538 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:25:50.509145   27538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:25:50.509156   27538 out.go:374] Setting ErrFile to fd 2...
	I1201 19:25:50.509162   27538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:25:50.509436   27538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:25:50.509899   27538 out.go:368] Setting JSON to false
	I1201 19:25:50.510683   27538 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4094,"bootTime":1764613057,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 19:25:50.510739   27538 start.go:143] virtualization: kvm guest
	I1201 19:25:50.512778   27538 out.go:179] * [functional-510618] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1201 19:25:50.514251   27538 notify.go:221] Checking for updates...
	I1201 19:25:50.514264   27538 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 19:25:50.515735   27538 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 19:25:50.517304   27538 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 19:25:50.518727   27538 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 19:25:50.519986   27538 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 19:25:50.521414   27538 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 19:25:50.523298   27538 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1201 19:25:50.523985   27538 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 19:25:50.557419   27538 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1201 19:25:50.558873   27538 start.go:309] selected driver: kvm2
	I1201 19:25:50.558887   27538 start.go:927] validating driver "kvm2" against &{Name:functional-510618 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21997/minikube-v1.37.0-1764600683-21997-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-510618 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1201 19:25:50.558988   27538 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 19:25:50.561118   27538 out.go:203] 
	W1201 19:25:50.562419   27538 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1201 19:25:50.563581   27538 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)
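Note: the French messages above are minikube's localized output for the same RSRC_INSUFFICIENT_REQ_MEMORY guard seen in the DryRun test. A sketch of reproducing it by hand, assuming minikube picks the language from the standard locale environment variables (an assumption, not something shown explicitly in this log):
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-510618 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
The request is rejected either way, since 250MB is below the 1800MB usable minimum.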

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.97s)
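Note: the -f flag takes a Go template over minikube's status struct, and the fields exercised above (.Host, .Kubelet, .APIServer, .Kubeconfig) can be combined freely. A sketch that prints only the API server state, assuming the same field names:
    out/minikube-linux-amd64 -p functional-510618 status -f 'apiserver:{{.APIServer}}'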

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-510618 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-510618 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-5pk2h" [f9875a5d-f9f3-436f-8c65-78b6d1f2ba68] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-5pk2h" [f9875a5d-f9f3-436f-8c65-78b6d1f2ba68] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003797512s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.200:32616
functional_test.go:1680: http://192.168.39.200:32616: success! body:
Request served by hello-node-connect-9f67c86d4-5pk2h

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.200:32616
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.48s)
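Note: the URL printed by service hello-node-connect --url is a plain NodePort endpoint, so the request the test made can be repeated by hand against the address reported above:
    curl -s http://192.168.39.200:32616/
The expected body is the echo-server reply naming the serving pod (hello-node-connect-9f67c86d4-5pk2h in this run).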

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f9e3b892-aeda-4cf7-b6ef-551f766767b2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004767386s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-510618 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-510618 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-510618 get pvc myclaim -o=json
I1201 19:25:33.399066   16868 retry.go:31] will retry after 1.369642035s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:39a2181d-6427-4739-aa5e-2bb9bd1f393e ResourceVersion:538 Generation:0 CreationTimestamp:2025-12-01 19:25:33 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0018e09c0 VolumeMode:0xc0018e09d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-510618 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-510618 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:25:34.990381   16868 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [39d44166-ff0e-4404-b083-f387f9f3e6dc] Pending
helpers_test.go:352: "sp-pod" [39d44166-ff0e-4404-b083-f387f9f3e6dc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [39d44166-ff0e-4404-b083-f387f9f3e6dc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.004073418s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-510618 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-510618 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-510618 apply -f testdata/storage-provisioner/pod.yaml
I1201 19:25:54.815537   16868 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fa9fd322-9be4-4f62-8b36-3cbd9d2ba5a6] Pending
helpers_test.go:352: "sp-pod" [fa9fd322-9be4-4f62-8b36-3cbd9d2ba5a6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fa9fd322-9be4-4f62-8b36-3cbd9d2ba5a6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005806502s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-510618 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.85s)
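Note: the claim created from testdata/storage-provisioner/pvc.yaml, as recorded in the last-applied-configuration annotation in the retry message above, is equivalent to the following manifest (reconstructed from that annotation, not copied from the repository):
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
      volumeMode: Filesystem
The second sp-pod listing /tmp/mount after the first pod wrote /tmp/mount/foo and was deleted is what demonstrates that the 500Mi volume persists across pod deletion.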

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.32s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh -n functional-510618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cp functional-510618:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2720645108/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh -n functional-510618 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh -n functional-510618 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.13s)
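Note: cp copies in both directions between the host and the guest VM, as the three invocations above show (host to guest, guest to host, and host to a guest directory that does not yet exist). A minimal round trip along the same lines (the second destination path is illustrative):
    out/minikube-linux-amd64 -p functional-510618 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-510618 cp functional-510618:/home/docker/cp-test.txt /tmp/cp-test-copy.txt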

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (24.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-510618 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-nmk79" [4ce9b39b-7d53-4ba3-9772-082f8b36cd11] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-nmk79" [4ce9b39b-7d53-4ba3-9772-082f8b36cd11] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 23.003759748s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-510618 exec mysql-844cf969f6-nmk79 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-510618 exec mysql-844cf969f6-nmk79 -- mysql -ppassword -e "show databases;": exit status 1 (125.523269ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1201 19:25:48.969049   16868 retry.go:31] will retry after 1.143692513s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-510618 exec mysql-844cf969f6-nmk79 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (24.58s)
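Note: the first exec fails with ERROR 2002 most likely because mysqld inside the pod is still bringing up its socket when the pod first reports Running; the test simply waits and retries, and the second attempt succeeds. The same query can be run by hand once the server is up (pod name taken from this run):
    kubectl --context functional-510618 exec mysql-844cf969f6-nmk79 -- mysql -ppassword -e "show databases;"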

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/16868/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /etc/test/nested/copy/16868/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/16868.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /etc/ssl/certs/16868.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/16868.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /usr/share/ca-certificates/16868.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/168682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /etc/ssl/certs/168682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/168682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /usr/share/ca-certificates/168682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.15s)
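Note: the test only cats the synced files. To confirm that both locations hold the same certificate one could compare fingerprints inside the VM, assuming openssl is present in the guest image (an assumption; the test itself does not do this):
    out/minikube-linux-amd64 -p functional-510618 ssh "sudo openssl x509 -in /etc/ssl/certs/16868.pem -noout -fingerprint"
    out/minikube-linux-amd64 -p functional-510618 ssh "sudo openssl x509 -in /usr/share/ca-certificates/16868.pem -noout -fingerprint"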

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-510618 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)
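Note: the go-template above prints only the label keys of the first node. A quick manual alternative that also shows the values, using a standard kubectl flag:
    kubectl --context functional-510618 get nodes --show-labels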

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "sudo systemctl is-active docker": exit status 1 (201.357527ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "sudo systemctl is-active containerd": exit status 1 (181.742749ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.38s)
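Note: systemctl is-active exits non-zero (status 3 in this run, surfaced as the ssh exit status) when a unit is inactive, which is exactly what the test expects for the two runtimes that are not in use. The selected runtime can be checked the same way, assuming the CRI-O unit is named crio as usual:
    out/minikube-linux-amd64 -p functional-510618 ssh "sudo systemctl is-active crio"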

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.83s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-510618 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-510618
localhost/kicbase/echo-server:functional-510618
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-510618 image ls --format short --alsologtostderr:
I1201 19:25:59.112661   27902 out.go:360] Setting OutFile to fd 1 ...
I1201 19:25:59.113007   27902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.113023   27902 out.go:374] Setting ErrFile to fd 2...
I1201 19:25:59.113030   27902 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.113335   27902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:25:59.114206   27902 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.114355   27902 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.117056   27902 ssh_runner.go:195] Run: systemctl --version
I1201 19:25:59.119792   27902 main.go:143] libmachine: domain functional-510618 has defined MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.120350   27902 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:7d:11", ip: ""} in network mk-functional-510618: {Iface:virbr1 ExpiryTime:2025-12-01 20:16:52 +0000 UTC Type:0 Mac:52:54:00:60:7d:11 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-510618 Clientid:01:52:54:00:60:7d:11}
I1201 19:25:59.120393   27902 main.go:143] libmachine: domain functional-510618 has defined IP address 192.168.39.200 and MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.120663   27902 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-510618/id_rsa Username:docker}
I1201 19:25:59.241046   27902 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.29s)
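Note: as the stderr trace shows, image ls gathers its data by running sudo crictl images --output json inside the guest over SSH, so the same information can be inspected directly, for example:
    out/minikube-linux-amd64 -p functional-510618 ssh "sudo crictl images"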

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-510618 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ latest            │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc      │ 56cc512116c8f │ 4.63MB │
│ docker.io/kicbase/echo-server           │ latest            │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-510618 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1            │ cd073f4c5f6a8 │ 740kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-510618 │ a486e51ebc1e0 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3               │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest            │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7               │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/pause                   │ 3.1               │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-510618 image ls --format table --alsologtostderr:
I1201 19:25:59.975121   27971 out.go:360] Setting OutFile to fd 1 ...
I1201 19:25:59.975377   27971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.975388   27971 out.go:374] Setting ErrFile to fd 2...
I1201 19:25:59.975392   27971 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.975611   27971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:25:59.976246   27971 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.976358   27971 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.978785   27971 ssh_runner.go:195] Run: systemctl --version
I1201 19:25:59.981661   27971 main.go:143] libmachine: domain functional-510618 has defined MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.982141   27971 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:7d:11", ip: ""} in network mk-functional-510618: {Iface:virbr1 ExpiryTime:2025-12-01 20:16:52 +0000 UTC Type:0 Mac:52:54:00:60:7d:11 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-510618 Clientid:01:52:54:00:60:7d:11}
I1201 19:25:59.982169   27971 main.go:143] libmachine: domain functional-510618 has defined IP address 192.168.39.200 and MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.982328   27971 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-510618/id_rsa Username:docker}
I1201 19:26:00.113775   27971 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-510618 image ls --format json --alsologtostderr:
[{"id":"a486e51ebc1e09177f327fc8e1dfa1eb4c21fc0568412cfae56655bf03282f4d","repoDigests":["localhost/minikube-local-cache-test@sha256:a96bd4c22c910b30ee5d728b7bf3a41c5d5c24afafb4805ceae1e33ea9ceff0c"],"repoTags":["localhost/minikube-local-cache-test:functional-510618"],"size":"3330"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc
193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-510618"],"size":"4943877"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/stora
ge-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/l
ibrary/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:
43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k
8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-510618 image ls --format json --alsologtostderr:
I1201 19:25:59.723767   27950 out.go:360] Setting OutFile to fd 1 ...
I1201 19:25:59.723885   27950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.723899   27950 out.go:374] Setting ErrFile to fd 2...
I1201 19:25:59.723904   27950 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.724134   27950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:25:59.724674   27950 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.724771   27950 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.726912   27950 ssh_runner.go:195] Run: systemctl --version
I1201 19:25:59.729459   27950 main.go:143] libmachine: domain functional-510618 has defined MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.729899   27950 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:7d:11", ip: ""} in network mk-functional-510618: {Iface:virbr1 ExpiryTime:2025-12-01 20:16:52 +0000 UTC Type:0 Mac:52:54:00:60:7d:11 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-510618 Clientid:01:52:54:00:60:7d:11}
I1201 19:25:59.729930   27950 main.go:143] libmachine: domain functional-510618 has defined IP address 192.168.39.200 and MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.730097   27950 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-510618/id_rsa Username:docker}
I1201 19:25:59.835560   27950 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-510618 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-510618
size: "4943877"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: a486e51ebc1e09177f327fc8e1dfa1eb4c21fc0568412cfae56655bf03282f4d
repoDigests:
- localhost/minikube-local-cache-test@sha256:a96bd4c22c910b30ee5d728b7bf3a41c5d5c24afafb4805ceae1e33ea9ceff0c
repoTags:
- localhost/minikube-local-cache-test:functional-510618
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-510618 image ls --format yaml --alsologtostderr:
I1201 19:25:59.404048   27929 out.go:360] Setting OutFile to fd 1 ...
I1201 19:25:59.404315   27929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.404325   27929 out.go:374] Setting ErrFile to fd 2...
I1201 19:25:59.404329   27929 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.404525   27929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:25:59.405101   27929 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.405198   27929 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.407948   27929 ssh_runner.go:195] Run: systemctl --version
I1201 19:25:59.410694   27929 main.go:143] libmachine: domain functional-510618 has defined MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.411231   27929 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:7d:11", ip: ""} in network mk-functional-510618: {Iface:virbr1 ExpiryTime:2025-12-01 20:16:52 +0000 UTC Type:0 Mac:52:54:00:60:7d:11 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-510618 Clientid:01:52:54:00:60:7d:11}
I1201 19:25:59.411261   27929 main.go:143] libmachine: domain functional-510618 has defined IP address 192.168.39.200 and MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.411443   27929 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-510618/id_rsa Username:docker}
I1201 19:25:59.543415   27929 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh pgrep buildkitd: exit status 1 (189.15279ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image build -t localhost/my-image:functional-510618 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 image build -t localhost/my-image:functional-510618 testdata/build --alsologtostderr: (4.303208266s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-510618 image build -t localhost/my-image:functional-510618 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 32a5c58bfb9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-510618
--> f3275fb83c2
Successfully tagged localhost/my-image:functional-510618
f3275fb83c2195bf79decb70ff2e898eb838d3ee2a1de80f1fb9e0b84d90115a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-510618 image build -t localhost/my-image:functional-510618 testdata/build --alsologtostderr:
I1201 19:25:59.845777   27960 out.go:360] Setting OutFile to fd 1 ...
I1201 19:25:59.845976   27960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.845992   27960 out.go:374] Setting ErrFile to fd 2...
I1201 19:25:59.845998   27960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1201 19:25:59.846332   27960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
I1201 19:25:59.847235   27960 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.847887   27960 config.go:182] Loaded profile config "functional-510618": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1201 19:25:59.850005   27960 ssh_runner.go:195] Run: systemctl --version
I1201 19:25:59.852579   27960 main.go:143] libmachine: domain functional-510618 has defined MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.853093   27960 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:60:7d:11", ip: ""} in network mk-functional-510618: {Iface:virbr1 ExpiryTime:2025-12-01 20:16:52 +0000 UTC Type:0 Mac:52:54:00:60:7d:11 Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:functional-510618 Clientid:01:52:54:00:60:7d:11}
I1201 19:25:59.853121   27960 main.go:143] libmachine: domain functional-510618 has defined IP address 192.168.39.200 and MAC address 52:54:00:60:7d:11 in network mk-functional-510618
I1201 19:25:59.853287   27960 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/functional-510618/id_rsa Username:docker}
I1201 19:25:59.966609   27960 build_images.go:162] Building image from path: /tmp/build.3218902824.tar
I1201 19:25:59.966678   27960 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1201 19:25:59.999396   27960 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3218902824.tar
I1201 19:26:00.011532   27960 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3218902824.tar: stat -c "%s %y" /var/lib/minikube/build/build.3218902824.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3218902824.tar': No such file or directory
I1201 19:26:00.011564   27960 ssh_runner.go:362] scp /tmp/build.3218902824.tar --> /var/lib/minikube/build/build.3218902824.tar (3072 bytes)
I1201 19:26:00.068487   27960 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3218902824
I1201 19:26:00.091881   27960 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3218902824 -xf /var/lib/minikube/build/build.3218902824.tar
I1201 19:26:00.115475   27960 crio.go:315] Building image: /var/lib/minikube/build/build.3218902824
I1201 19:26:00.115534   27960 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-510618 /var/lib/minikube/build/build.3218902824 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1201 19:26:04.051615   27960 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-510618 /var/lib/minikube/build/build.3218902824 --cgroup-manager=cgroupfs: (3.936057346s)
I1201 19:26:04.051708   27960 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3218902824
I1201 19:26:04.066332   27960 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3218902824.tar
I1201 19:26:04.078704   27960 build_images.go:218] Built localhost/my-image:functional-510618 from /tmp/build.3218902824.tar
I1201 19:26:04.078744   27960 build_images.go:134] succeeded building to: functional-510618
I1201 19:26:04.078762   27960 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
2025/12/01 19:26:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.68s)
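
For context, the stdout above implies a build context roughly like the following; this is a sketch reconstructed from STEP 1/3 through 3/3, and the content.txt payload is a placeholder since the real testdata file is not shown in the log:

# recreate the build context in a scratch directory (placeholder payload)
printf 'placeholder content\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# build inside the cluster's runtime (cri-o uses podman build under the hood), then confirm the tag is visible
out/minikube-linux-amd64 -p functional-510618 image build -t localhost/my-image:functional-510618 .
out/minikube-linux-amd64 -p functional-510618 image ls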

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image load --daemon kicbase/echo-server:functional-510618 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 image load --daemon kicbase/echo-server:functional-510618 --alsologtostderr: (1.254917834s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image load --daemon kicbase/echo-server:functional-510618 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-510618
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image load --daemon kicbase/echo-server:functional-510618 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)
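
The three daemon-load tests above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) exercise the same host-to-cluster round trip; a condensed sketch with the image and profile names taken from the log:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-510618
# copy the image from the host docker daemon into the cluster's container runtime, then verify
out/minikube-linux-amd64 -p functional-510618 image load --daemon kicbase/echo-server:functional-510618
out/minikube-linux-amd64 -p functional-510618 image ls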

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image save kicbase/echo-server:functional-510618 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image rm kicbase/echo-server:functional-510618 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-510618
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 image save --daemon kicbase/echo-server:functional-510618 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.83s)
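
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon amount to a tarball round trip; a minimal sketch, with a scratch path substituted for the Jenkins workspace path used in the log:

# export the image from the cluster to a tarball, then drop it from the cluster
out/minikube-linux-amd64 -p functional-510618 image save kicbase/echo-server:functional-510618 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-510618 image rm kicbase/echo-server:functional-510618
# reload it from the tarball, then push it back into the host docker daemon and verify
out/minikube-linux-amd64 -p functional-510618 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-510618 image save --daemon kicbase/echo-server:functional-510618
docker image inspect localhost/kicbase/echo-server:functional-510618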

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (21.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-510618 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-510618 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-tlmh8" [303d69cd-23e8-46bc-b674-a673adf19170] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-tlmh8" [303d69cd-23e8-46bc-b674-a673adf19170] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.004533567s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (21.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "256.720553ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "69.33661ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "278.362455ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.729215ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (19.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3165908583/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764617137019402914" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3165908583/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764617137019402914" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3165908583/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764617137019402914" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3165908583/001/test-1764617137019402914
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (241.417591ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:25:37.261152   16868 retry.go:31] will retry after 376.659648ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  1 19:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  1 19:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  1 19:25 test-1764617137019402914
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh cat /mount-9p/test-1764617137019402914
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-510618 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [192f963f-df95-4f33-9ee0-641e401524c7] Pending
helpers_test.go:352: "busybox-mount" [192f963f-df95-4f33-9ee0-641e401524c7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [192f963f-df95-4f33-9ee0-641e401524c7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [192f963f-df95-4f33-9ee0-641e401524c7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004842143s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-510618 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3165908583/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (19.08s)
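
A minimal sketch of the 9p mount round trip this test performs, with a hypothetical host directory in place of the per-test temp path (the mount command stays in the foreground, so it is backgrounded here):

mkdir -p /tmp/mount-src
out/minikube-linux-amd64 mount -p functional-510618 /tmp/mount-src:/mount-9p &
# verify the mount is visible from the guest and inspect its contents
out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-510618 ssh -- ls -la /mount-9p
# clean up: unmount in the guest, then stop the background mount process
out/minikube-linux-amd64 -p functional-510618 ssh "sudo umount -f /mount-9p"
kill %1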

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 service list: (1.289595846s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205310559/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (172.145123ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:25:56.276574   16868 retry.go:31] will retry after 404.551777ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205310559/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "sudo umount -f /mount-9p": exit status 1 (160.856721ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-510618 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205310559/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-510618 service list -o json: (1.225328529s)
functional_test.go:1504: Took "1.225412932s" to run "out/minikube-linux-amd64 -p functional-510618 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T" /mount1: exit status 1 (173.38052ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1201 19:25:57.570074   16868 retry.go:31] will retry after 601.814965ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-510618 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-510618 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo502019492/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.200:30125
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-510618 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.200:30125
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.33s)
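
The ServiceCmd tests above all act on the hello-node deployment created in ServiceCmd/DeployApp; a condensed sketch of that flow (names and port from the log; the NodePort value such as 30125 is assigned per run):

kubectl --context functional-510618 create deployment hello-node --image=kicbase/echo-server
kubectl --context functional-510618 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-510618 service list
out/minikube-linux-amd64 -p functional-510618 service hello-node --url
out/minikube-linux-amd64 -p functional-510618 service --namespace=default --https --url hello-node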

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-510618
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1201 19:26:19.002397   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:27:41.102596   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:29:04.173967   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m11.335602626s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (191.93s)
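
For reference, the HA cluster used by the remaining TestMultiControlPlane steps is created with the flags below (copied from the log); the shared API endpoint seen later in the status output (https://192.168.39.254:8443) fronts the three control-plane nodes:

out/minikube-linux-amd64 -p ha-933175 start --ha --memory 3072 --wait true \
  --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5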

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 kubectl -- rollout status deployment/busybox: (4.755185874s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-69zsv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-vng2c -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-69zsv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-vng2c -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-69zsv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-vng2c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.09s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-69zsv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-69zsv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-vng2c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-vng2c -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)
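
Each pod-level connectivity check above reduces to one DNS lookup and one ping; a representative pair using a pod name from this run (pod names and the 192.168.39.1 host gateway address will differ between runs):

out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- nslookup kubernetes.default.svc.cluster.local
out/minikube-linux-amd64 -p ha-933175 kubectl -- exec busybox-7b57f96db7-54bfm -- sh -c "ping -c 1 192.168.39.1"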

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (46.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 node add --alsologtostderr -v 5: (45.553366717s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (46.23s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-933175 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp testdata/cp-test.txt ha-933175:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3856189398/001/cp-test_ha-933175.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175:/home/docker/cp-test.txt ha-933175-m02:/home/docker/cp-test_ha-933175_ha-933175-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test_ha-933175_ha-933175-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175:/home/docker/cp-test.txt ha-933175-m03:/home/docker/cp-test_ha-933175_ha-933175-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test_ha-933175_ha-933175-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175:/home/docker/cp-test.txt ha-933175-m04:/home/docker/cp-test_ha-933175_ha-933175-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test_ha-933175_ha-933175-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp testdata/cp-test.txt ha-933175-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3856189398/001/cp-test_ha-933175-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m02:/home/docker/cp-test.txt ha-933175:/home/docker/cp-test_ha-933175-m02_ha-933175.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test_ha-933175-m02_ha-933175.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m02:/home/docker/cp-test.txt ha-933175-m03:/home/docker/cp-test_ha-933175-m02_ha-933175-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test_ha-933175-m02_ha-933175-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m02:/home/docker/cp-test.txt ha-933175-m04:/home/docker/cp-test_ha-933175-m02_ha-933175-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test_ha-933175-m02_ha-933175-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp testdata/cp-test.txt ha-933175-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3856189398/001/cp-test_ha-933175-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m03:/home/docker/cp-test.txt ha-933175:/home/docker/cp-test_ha-933175-m03_ha-933175.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test_ha-933175-m03_ha-933175.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m03:/home/docker/cp-test.txt ha-933175-m02:/home/docker/cp-test_ha-933175-m03_ha-933175-m02.txt
E1201 19:30:25.633357   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:25.639919   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:25.651356   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:25.672845   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test.txt"
E1201 19:30:25.715157   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:25.796638   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test_ha-933175-m03_ha-933175-m02.txt"
E1201 19:30:25.958743   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m03:/home/docker/cp-test.txt ha-933175-m04:/home/docker/cp-test_ha-933175-m03_ha-933175-m04.txt
E1201 19:30:26.280187   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test_ha-933175-m03_ha-933175-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp testdata/cp-test.txt ha-933175-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test.txt"
E1201 19:30:26.922220   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3856189398/001/cp-test_ha-933175-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m04:/home/docker/cp-test.txt ha-933175:/home/docker/cp-test_ha-933175-m04_ha-933175.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175 "sudo cat /home/docker/cp-test_ha-933175-m04_ha-933175.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m04:/home/docker/cp-test.txt ha-933175-m02:/home/docker/cp-test_ha-933175-m04_ha-933175-m02.txt
E1201 19:30:28.204597   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test_ha-933175-m04_ha-933175-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 cp ha-933175-m04:/home/docker/cp-test.txt ha-933175-m03:/home/docker/cp-test_ha-933175-m04_ha-933175-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m03 "sudo cat /home/docker/cp-test_ha-933175-m04_ha-933175-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.84s)
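
Every CopyFile permutation above follows the same push-then-verify pattern; one representative pair copied from the log:

out/minikube-linux-amd64 -p ha-933175 cp testdata/cp-test.txt ha-933175-m02:/home/docker/cp-test.txt
out/minikube-linux-amd64 -p ha-933175 ssh -n ha-933175-m02 "sudo cat /home/docker/cp-test.txt"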

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (80.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node stop m02 --alsologtostderr -v 5
E1201 19:30:30.766639   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:35.888799   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:46.130985   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:30:51.295020   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:31:06.613016   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:31:47.576176   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 node stop m02 --alsologtostderr -v 5: (1m19.997002664s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5: exit status 7 (506.9194ms)

                                                
                                                
-- stdout --
	ha-933175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-933175-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-933175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-933175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:31:49.225074   30930 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:31:49.225354   30930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:31:49.225364   30930 out.go:374] Setting ErrFile to fd 2...
	I1201 19:31:49.225368   30930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:31:49.225591   30930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:31:49.225812   30930 out.go:368] Setting JSON to false
	I1201 19:31:49.225852   30930 mustload.go:66] Loading cluster: ha-933175
	I1201 19:31:49.225957   30930 notify.go:221] Checking for updates...
	I1201 19:31:49.226386   30930 config.go:182] Loaded profile config "ha-933175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:31:49.226408   30930 status.go:174] checking status of ha-933175 ...
	I1201 19:31:49.228786   30930 status.go:371] ha-933175 host status = "Running" (err=<nil>)
	I1201 19:31:49.228802   30930 host.go:66] Checking if "ha-933175" exists ...
	I1201 19:31:49.231760   30930 main.go:143] libmachine: domain ha-933175 has defined MAC address 52:54:00:04:77:50 in network mk-ha-933175
	I1201 19:31:49.232272   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:77:50", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:26:26 +0000 UTC Type:0 Mac:52:54:00:04:77:50 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-933175 Clientid:01:52:54:00:04:77:50}
	I1201 19:31:49.232298   30930 main.go:143] libmachine: domain ha-933175 has defined IP address 192.168.39.41 and MAC address 52:54:00:04:77:50 in network mk-ha-933175
	I1201 19:31:49.232417   30930 host.go:66] Checking if "ha-933175" exists ...
	I1201 19:31:49.232665   30930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:31:49.234938   30930 main.go:143] libmachine: domain ha-933175 has defined MAC address 52:54:00:04:77:50 in network mk-ha-933175
	I1201 19:31:49.235516   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:04:77:50", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:26:26 +0000 UTC Type:0 Mac:52:54:00:04:77:50 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-933175 Clientid:01:52:54:00:04:77:50}
	I1201 19:31:49.235562   30930 main.go:143] libmachine: domain ha-933175 has defined IP address 192.168.39.41 and MAC address 52:54:00:04:77:50 in network mk-ha-933175
	I1201 19:31:49.235713   30930 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/ha-933175/id_rsa Username:docker}
	I1201 19:31:49.322366   30930 ssh_runner.go:195] Run: systemctl --version
	I1201 19:31:49.330463   30930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:31:49.349806   30930 kubeconfig.go:125] found "ha-933175" server: "https://192.168.39.254:8443"
	I1201 19:31:49.349868   30930 api_server.go:166] Checking apiserver status ...
	I1201 19:31:49.349915   30930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:31:49.373288   30930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup
	W1201 19:31:49.385856   30930 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1371/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:31:49.385913   30930 ssh_runner.go:195] Run: ls
	I1201 19:31:49.393081   30930 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1201 19:31:49.398655   30930 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1201 19:31:49.398688   30930 status.go:463] ha-933175 apiserver status = Running (err=<nil>)
	I1201 19:31:49.398699   30930 status.go:176] ha-933175 status: &{Name:ha-933175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:31:49.398720   30930 status.go:174] checking status of ha-933175-m02 ...
	I1201 19:31:49.400485   30930 status.go:371] ha-933175-m02 host status = "Stopped" (err=<nil>)
	I1201 19:31:49.400507   30930 status.go:384] host is not running, skipping remaining checks
	I1201 19:31:49.400514   30930 status.go:176] ha-933175-m02 status: &{Name:ha-933175-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:31:49.400534   30930 status.go:174] checking status of ha-933175-m03 ...
	I1201 19:31:49.402095   30930 status.go:371] ha-933175-m03 host status = "Running" (err=<nil>)
	I1201 19:31:49.402117   30930 host.go:66] Checking if "ha-933175-m03" exists ...
	I1201 19:31:49.404607   30930 main.go:143] libmachine: domain ha-933175-m03 has defined MAC address 52:54:00:7b:f8:44 in network mk-ha-933175
	I1201 19:31:49.405057   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:44", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:28:18 +0000 UTC Type:0 Mac:52:54:00:7b:f8:44 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-933175-m03 Clientid:01:52:54:00:7b:f8:44}
	I1201 19:31:49.405083   30930 main.go:143] libmachine: domain ha-933175-m03 has defined IP address 192.168.39.203 and MAC address 52:54:00:7b:f8:44 in network mk-ha-933175
	I1201 19:31:49.405205   30930 host.go:66] Checking if "ha-933175-m03" exists ...
	I1201 19:31:49.405410   30930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:31:49.408250   30930 main.go:143] libmachine: domain ha-933175-m03 has defined MAC address 52:54:00:7b:f8:44 in network mk-ha-933175
	I1201 19:31:49.408677   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:f8:44", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:28:18 +0000 UTC Type:0 Mac:52:54:00:7b:f8:44 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:ha-933175-m03 Clientid:01:52:54:00:7b:f8:44}
	I1201 19:31:49.408699   30930 main.go:143] libmachine: domain ha-933175-m03 has defined IP address 192.168.39.203 and MAC address 52:54:00:7b:f8:44 in network mk-ha-933175
	I1201 19:31:49.408905   30930 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/ha-933175-m03/id_rsa Username:docker}
	I1201 19:31:49.496815   30930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:31:49.515845   30930 kubeconfig.go:125] found "ha-933175" server: "https://192.168.39.254:8443"
	I1201 19:31:49.515881   30930 api_server.go:166] Checking apiserver status ...
	I1201 19:31:49.515928   30930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:31:49.538505   30930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup
	W1201 19:31:49.550075   30930 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:31:49.550154   30930 ssh_runner.go:195] Run: ls
	I1201 19:31:49.555408   30930 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1201 19:31:49.560351   30930 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1201 19:31:49.560374   30930 status.go:463] ha-933175-m03 apiserver status = Running (err=<nil>)
	I1201 19:31:49.560384   30930 status.go:176] ha-933175-m03 status: &{Name:ha-933175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:31:49.560403   30930 status.go:174] checking status of ha-933175-m04 ...
	I1201 19:31:49.562005   30930 status.go:371] ha-933175-m04 host status = "Running" (err=<nil>)
	I1201 19:31:49.562026   30930 host.go:66] Checking if "ha-933175-m04" exists ...
	I1201 19:31:49.564598   30930 main.go:143] libmachine: domain ha-933175-m04 has defined MAC address 52:54:00:af:ba:15 in network mk-ha-933175
	I1201 19:31:49.565068   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:ba:15", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:29:47 +0000 UTC Type:0 Mac:52:54:00:af:ba:15 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-933175-m04 Clientid:01:52:54:00:af:ba:15}
	I1201 19:31:49.565096   30930 main.go:143] libmachine: domain ha-933175-m04 has defined IP address 192.168.39.46 and MAC address 52:54:00:af:ba:15 in network mk-ha-933175
	I1201 19:31:49.565258   30930 host.go:66] Checking if "ha-933175-m04" exists ...
	I1201 19:31:49.565456   30930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:31:49.567569   30930 main.go:143] libmachine: domain ha-933175-m04 has defined MAC address 52:54:00:af:ba:15 in network mk-ha-933175
	I1201 19:31:49.567945   30930 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:af:ba:15", ip: ""} in network mk-ha-933175: {Iface:virbr1 ExpiryTime:2025-12-01 20:29:47 +0000 UTC Type:0 Mac:52:54:00:af:ba:15 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:ha-933175-m04 Clientid:01:52:54:00:af:ba:15}
	I1201 19:31:49.567966   30930 main.go:143] libmachine: domain ha-933175-m04 has defined IP address 192.168.39.46 and MAC address 52:54:00:af:ba:15 in network mk-ha-933175
	I1201 19:31:49.568097   30930 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/ha-933175-m04/id_rsa Username:docker}
	I1201 19:31:49.651219   30930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:31:49.673014   30930 status.go:176] ha-933175-m04 status: &{Name:ha-933175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (80.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (34.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 node start m02 --alsologtostderr -v 5: (33.587319456s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (350.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 stop --alsologtostderr -v 5
E1201 19:32:41.101360   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:33:09.497561   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:35:25.633942   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:35:51.295651   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:35:53.339402   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 stop --alsologtostderr -v 5: (3m56.131161873s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 start --wait true --alsologtostderr -v 5
E1201 19:37:14.363889   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:37:41.099107   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 start --wait true --alsologtostderr -v 5: (1m54.030034686s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (350.30s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 node delete m03 --alsologtostderr -v 5: (17.782139772s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.41s)
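Note: the go-template passed to kubectl above iterates each node's status.conditions and prints only the Ready condition's status, which is what the test asserts on. A minimal, self-contained sketch of that template logic using Go's text/template; the sample node JSON is a hypothetical stand-in for `kubectl get nodes -o json`, not output from this run:
	package main
	
	import (
		"encoding/json"
		"os"
		"text/template"
	)
	
	func main() {
		// Hypothetical stand-in for `kubectl get nodes -o json` (one node, two conditions).
		doc := `{"items":[{"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}}]}`
	
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(doc), &nodes); err != nil {
			panic(err)
		}
	
		// The same template string the test passes via -o go-template.
		ready := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	
		// Executing against the sample prints " True" (one line per Ready node).
		if err := ready.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}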

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (255.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 stop --alsologtostderr -v 5
E1201 19:40:25.635549   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:40:51.295082   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:42:41.102997   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 stop --alsologtostderr -v 5: (4m15.928840198s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5: exit status 7 (62.585319ms)

                                                
                                                
-- stdout --
	ha-933175
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-933175-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-933175-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:42:50.574395   34158 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:42:50.574507   34158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:42:50.574515   34158 out.go:374] Setting ErrFile to fd 2...
	I1201 19:42:50.574521   34158 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:42:50.574731   34158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:42:50.574922   34158 out.go:368] Setting JSON to false
	I1201 19:42:50.574951   34158 mustload.go:66] Loading cluster: ha-933175
	I1201 19:42:50.575072   34158 notify.go:221] Checking for updates...
	I1201 19:42:50.575333   34158 config.go:182] Loaded profile config "ha-933175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:42:50.575349   34158 status.go:174] checking status of ha-933175 ...
	I1201 19:42:50.577595   34158 status.go:371] ha-933175 host status = "Stopped" (err=<nil>)
	I1201 19:42:50.577610   34158 status.go:384] host is not running, skipping remaining checks
	I1201 19:42:50.577616   34158 status.go:176] ha-933175 status: &{Name:ha-933175 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:42:50.577635   34158 status.go:174] checking status of ha-933175-m02 ...
	I1201 19:42:50.578634   34158 status.go:371] ha-933175-m02 host status = "Stopped" (err=<nil>)
	I1201 19:42:50.578648   34158 status.go:384] host is not running, skipping remaining checks
	I1201 19:42:50.578653   34158 status.go:176] ha-933175-m02 status: &{Name:ha-933175-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:42:50.578667   34158 status.go:174] checking status of ha-933175-m04 ...
	I1201 19:42:50.579918   34158 status.go:371] ha-933175-m04 host status = "Stopped" (err=<nil>)
	I1201 19:42:50.579932   34158 status.go:384] host is not running, skipping remaining checks
	I1201 19:42:50.579936   34158 status.go:176] ha-933175-m04 status: &{Name:ha-933175-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (255.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (91.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m30.835910263s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 node add --control-plane --alsologtostderr -v 5
E1201 19:45:25.633745   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-933175 node add --control-plane --alsologtostderr -v 5: (1m18.19150816s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-933175 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                    
x
+
TestJSONOutput/start/Command (76.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-713998 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1201 19:45:51.295046   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:46:48.701334   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-713998 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.224926276s)
--- PASS: TestJSONOutput/start/Command (76.23s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-713998 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-713998 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.84s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-713998 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-713998 --output=json --user=testUser: (6.835456079s)
--- PASS: TestJSONOutput/stop/Command (6.84s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-385147 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-385147 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.601952ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"102d38e5-45db-4707-ac7b-d02315593740","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-385147] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36ed9206-1560-4046-9a77-24ec80574e54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"fe286624-ecff-4814-a566-5f2e0d85d968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d119df6-3acf-4470-9f09-ecf6c637c0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig"}}
	{"specversion":"1.0","id":"c71ff0b6-1e21-4a9e-8f64-b6dacceb2d59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube"}}
	{"specversion":"1.0","id":"22ee3c41-d236-48bd-b535-9a0703d5e198","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3e557889-1e56-4ca6-b6bc-d798aa3ffcd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"582150f6-8721-47f8-aa60-1f8002e4e040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-385147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-385147
--- PASS: TestErrorJSONOutput (0.24s)
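Note: every stdout line above is a CloudEvents-style JSON event with the fields shown (specversion, id, source, type, datacontenttype, data); error events such as DRV_UNSUPPORTED_OS carry their exit code and message inside the data payload. A minimal decoding sketch in Go using only encoding/json; the struct name and the trimmed sample line below are illustrative, not minikube's own types:
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// event mirrors the fields visible in the report output above (sketch only).
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}
	
	func main() {
		// Trimmed copy of the error event printed by TestErrorJSONOutput above.
		line := `{"specversion":"1.0","id":"582150f6-8721-47f8-aa60-1f8002e4e040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
The same approach applies to the step and info events emitted by `start --output=json` earlier in this report.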

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (82.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-068285 --driver=kvm2  --container-runtime=crio
E1201 19:47:41.103024   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-068285 --driver=kvm2  --container-runtime=crio: (40.092416015s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-070380 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-070380 --driver=kvm2  --container-runtime=crio: (40.004636557s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-068285
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-070380
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-070380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-070380
helpers_test.go:175: Cleaning up "first-068285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-068285
--- PASS: TestMinikubeProfile (82.70s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (19.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-048155 --memory=3072 --mount-string /tmp/TestMountStartserial1642633947/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-048155 --memory=3072 --mount-string /tmp/TestMountStartserial1642633947/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.760240012s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-048155 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-048155 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (21.09s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-063531 --memory=3072 --mount-string /tmp/TestMountStartserial1642633947/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-063531 --memory=3072 --mount-string /tmp/TestMountStartserial1642633947/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.090471882s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.09s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-048155 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-063531
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-063531: (1.239946912s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-063531
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-063531: (17.630421484s)
--- PASS: TestMountStart/serial/RestartStopped (18.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-063531 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (127.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-685862 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1201 19:50:25.634288   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:50:51.295265   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-685862 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m6.769102611s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.11s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-685862 -- rollout status deployment/busybox: (3.920614092s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-764mf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-sq4hh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-764mf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-sq4hh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-764mf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-sq4hh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-764mf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-764mf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-sq4hh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-685862 -- exec busybox-7b57f96db7-sq4hh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-685862 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-685862 -v=5 --alsologtostderr: (44.908418258s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.36s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-685862 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp testdata/cp-test.txt multinode-685862:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3378463157/001/cp-test_multinode-685862.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862:/home/docker/cp-test.txt multinode-685862-m02:/home/docker/cp-test_multinode-685862_multinode-685862-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test_multinode-685862_multinode-685862-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862:/home/docker/cp-test.txt multinode-685862-m03:/home/docker/cp-test_multinode-685862_multinode-685862-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test_multinode-685862_multinode-685862-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp testdata/cp-test.txt multinode-685862-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3378463157/001/cp-test_multinode-685862-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m02:/home/docker/cp-test.txt multinode-685862:/home/docker/cp-test_multinode-685862-m02_multinode-685862.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test_multinode-685862-m02_multinode-685862.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m02:/home/docker/cp-test.txt multinode-685862-m03:/home/docker/cp-test_multinode-685862-m02_multinode-685862-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test_multinode-685862-m02_multinode-685862-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp testdata/cp-test.txt multinode-685862-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3378463157/001/cp-test_multinode-685862-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m03:/home/docker/cp-test.txt multinode-685862:/home/docker/cp-test_multinode-685862-m03_multinode-685862.txt
E1201 19:52:41.099210   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862 "sudo cat /home/docker/cp-test_multinode-685862-m03_multinode-685862.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 cp multinode-685862-m03:/home/docker/cp-test.txt multinode-685862-m02:/home/docker/cp-test_multinode-685862-m03_multinode-685862-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 ssh -n multinode-685862-m02 "sudo cat /home/docker/cp-test_multinode-685862-m03_multinode-685862-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.13s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-685862 node stop m03: (1.840912876s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-685862 status: exit status 7 (329.152233ms)

                                                
                                                
-- stdout --
	multinode-685862
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-685862-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-685862-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr: exit status 7 (339.29811ms)

                                                
                                                
-- stdout --
	multinode-685862
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-685862-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-685862-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 19:52:44.359002   39787 out.go:360] Setting OutFile to fd 1 ...
	I1201 19:52:44.359115   39787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:52:44.359123   39787 out.go:374] Setting ErrFile to fd 2...
	I1201 19:52:44.359127   39787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 19:52:44.359331   39787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 19:52:44.359498   39787 out.go:368] Setting JSON to false
	I1201 19:52:44.359526   39787 mustload.go:66] Loading cluster: multinode-685862
	I1201 19:52:44.359662   39787 notify.go:221] Checking for updates...
	I1201 19:52:44.359957   39787 config.go:182] Loaded profile config "multinode-685862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 19:52:44.359973   39787 status.go:174] checking status of multinode-685862 ...
	I1201 19:52:44.361816   39787 status.go:371] multinode-685862 host status = "Running" (err=<nil>)
	I1201 19:52:44.361846   39787 host.go:66] Checking if "multinode-685862" exists ...
	I1201 19:52:44.364248   39787 main.go:143] libmachine: domain multinode-685862 has defined MAC address 52:54:00:8c:a5:38 in network mk-multinode-685862
	I1201 19:52:44.364665   39787 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:a5:38", ip: ""} in network mk-multinode-685862: {Iface:virbr1 ExpiryTime:2025-12-01 20:49:51 +0000 UTC Type:0 Mac:52:54:00:8c:a5:38 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-685862 Clientid:01:52:54:00:8c:a5:38}
	I1201 19:52:44.364695   39787 main.go:143] libmachine: domain multinode-685862 has defined IP address 192.168.39.123 and MAC address 52:54:00:8c:a5:38 in network mk-multinode-685862
	I1201 19:52:44.364813   39787 host.go:66] Checking if "multinode-685862" exists ...
	I1201 19:52:44.364998   39787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:52:44.366822   39787 main.go:143] libmachine: domain multinode-685862 has defined MAC address 52:54:00:8c:a5:38 in network mk-multinode-685862
	I1201 19:52:44.367223   39787 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8c:a5:38", ip: ""} in network mk-multinode-685862: {Iface:virbr1 ExpiryTime:2025-12-01 20:49:51 +0000 UTC Type:0 Mac:52:54:00:8c:a5:38 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-685862 Clientid:01:52:54:00:8c:a5:38}
	I1201 19:52:44.367247   39787 main.go:143] libmachine: domain multinode-685862 has defined IP address 192.168.39.123 and MAC address 52:54:00:8c:a5:38 in network mk-multinode-685862
	I1201 19:52:44.367365   39787 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/multinode-685862/id_rsa Username:docker}
	I1201 19:52:44.451630   39787 ssh_runner.go:195] Run: systemctl --version
	I1201 19:52:44.458186   39787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:52:44.475432   39787 kubeconfig.go:125] found "multinode-685862" server: "https://192.168.39.123:8443"
	I1201 19:52:44.475470   39787 api_server.go:166] Checking apiserver status ...
	I1201 19:52:44.475525   39787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1201 19:52:44.497119   39787 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	W1201 19:52:44.512621   39787 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1201 19:52:44.512697   39787 ssh_runner.go:195] Run: ls
	I1201 19:52:44.524461   39787 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I1201 19:52:44.529615   39787 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I1201 19:52:44.529646   39787 status.go:463] multinode-685862 apiserver status = Running (err=<nil>)
	I1201 19:52:44.529658   39787 status.go:176] multinode-685862 status: &{Name:multinode-685862 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:52:44.529686   39787 status.go:174] checking status of multinode-685862-m02 ...
	I1201 19:52:44.531739   39787 status.go:371] multinode-685862-m02 host status = "Running" (err=<nil>)
	I1201 19:52:44.531763   39787 host.go:66] Checking if "multinode-685862-m02" exists ...
	I1201 19:52:44.534489   39787 main.go:143] libmachine: domain multinode-685862-m02 has defined MAC address 52:54:00:6d:43:d4 in network mk-multinode-685862
	I1201 19:52:44.534972   39787 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:43:d4", ip: ""} in network mk-multinode-685862: {Iface:virbr1 ExpiryTime:2025-12-01 20:51:15 +0000 UTC Type:0 Mac:52:54:00:6d:43:d4 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-685862-m02 Clientid:01:52:54:00:6d:43:d4}
	I1201 19:52:44.535002   39787 main.go:143] libmachine: domain multinode-685862-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:6d:43:d4 in network mk-multinode-685862
	I1201 19:52:44.535142   39787 host.go:66] Checking if "multinode-685862-m02" exists ...
	I1201 19:52:44.535335   39787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1201 19:52:44.537547   39787 main.go:143] libmachine: domain multinode-685862-m02 has defined MAC address 52:54:00:6d:43:d4 in network mk-multinode-685862
	I1201 19:52:44.537945   39787 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6d:43:d4", ip: ""} in network mk-multinode-685862: {Iface:virbr1 ExpiryTime:2025-12-01 20:51:15 +0000 UTC Type:0 Mac:52:54:00:6d:43:d4 Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:multinode-685862-m02 Clientid:01:52:54:00:6d:43:d4}
	I1201 19:52:44.537968   39787 main.go:143] libmachine: domain multinode-685862-m02 has defined IP address 192.168.39.34 and MAC address 52:54:00:6d:43:d4 in network mk-multinode-685862
	I1201 19:52:44.538125   39787 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-12903/.minikube/machines/multinode-685862-m02/id_rsa Username:docker}
	I1201 19:52:44.622618   39787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1201 19:52:44.638520   39787 status.go:176] multinode-685862-m02 status: &{Name:multinode-685862-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1201 19:52:44.638578   39787 status.go:174] checking status of multinode-685862-m03 ...
	I1201 19:52:44.640246   39787 status.go:371] multinode-685862-m03 host status = "Stopped" (err=<nil>)
	I1201 19:52:44.640262   39787 status.go:384] host is not running, skipping remaining checks
	I1201 19:52:44.640267   39787 status.go:176] multinode-685862-m03 status: &{Name:multinode-685862-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
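For reference, a minimal sketch of the flow this subtest exercises, using the same profile and node names as above; the non-zero exit is expected because stopping a node makes the status command exit with a failure code (7 in this run):
	# stop a single worker node, then query cluster status
	out/minikube-linux-amd64 -p multinode-685862 node stop m03
	out/minikube-linux-amd64 -p multinode-685862 status
	echo "status exit code: $?"   # 7 here, because multinode-685862-m03 is Stopped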

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (41.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-685862 node start m03 -v=5 --alsologtostderr: (40.663687209s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.18s)
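The commands this subtest runs boil down to the sequence below (same profile and node names as above): restart the stopped node, then confirm it rejoins the cluster.
	out/minikube-linux-amd64 -p multinode-685862 node start m03 -v=5 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-685862 status -v=5 --alsologtostderr
	kubectl get nodes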

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (289.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-685862
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-685862
E1201 19:53:54.367324   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:55:25.637114   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 19:55:51.294775   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-685862: (2m46.813122266s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-685862 --wait=true -v=5 --alsologtostderr
E1201 19:57:41.099577   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-685862 --wait=true -v=5 --alsologtostderr: (2m2.918833133s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-685862
--- PASS: TestMultiNode/serial/RestartKeepsNodes (289.86s)
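A condensed sketch of what the subtest verifies: the node list survives a full stop and restart of the profile.
	# record the node list, stop every node, restart, and compare
	out/minikube-linux-amd64 node list -p multinode-685862
	out/minikube-linux-amd64 stop -p multinode-685862
	out/minikube-linux-amd64 start -p multinode-685862 --wait=true -v=5 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-685862   # should match the first listing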

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-685862 node delete m03: (2.057210333s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.52s)
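The go-template check above is hard to read through the shell quoting; an equivalent readiness check (illustrative, not what the test itself runs) can be expressed with jsonpath:
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'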

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (173.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 stop
E1201 20:00:25.636987   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:00:51.294673   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-685862 stop: (2m53.780817924s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-685862 status: exit status 7 (61.602178ms)

                                                
                                                
-- stdout --
	multinode-685862
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-685862-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr: exit status 7 (61.725093ms)

                                                
                                                
-- stdout --
	multinode-685862
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-685862-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:01:12.093676   42099 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:01:12.093782   42099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:12.093790   42099 out.go:374] Setting ErrFile to fd 2...
	I1201 20:01:12.093794   42099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:01:12.093987   42099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:01:12.094136   42099 out.go:368] Setting JSON to false
	I1201 20:01:12.094159   42099 mustload.go:66] Loading cluster: multinode-685862
	I1201 20:01:12.094284   42099 notify.go:221] Checking for updates...
	I1201 20:01:12.094536   42099 config.go:182] Loaded profile config "multinode-685862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:01:12.094550   42099 status.go:174] checking status of multinode-685862 ...
	I1201 20:01:12.096727   42099 status.go:371] multinode-685862 host status = "Stopped" (err=<nil>)
	I1201 20:01:12.096743   42099 status.go:384] host is not running, skipping remaining checks
	I1201 20:01:12.096747   42099 status.go:176] multinode-685862 status: &{Name:multinode-685862 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1201 20:01:12.096782   42099 status.go:174] checking status of multinode-685862-m02 ...
	I1201 20:01:12.098181   42099 status.go:371] multinode-685862-m02 host status = "Stopped" (err=<nil>)
	I1201 20:01:12.098194   42099 status.go:384] host is not running, skipping remaining checks
	I1201 20:01:12.098198   42099 status.go:176] multinode-685862-m02 status: &{Name:multinode-685862-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (173.90s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (82.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-685862 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1201 20:02:24.177631   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-685862 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m22.448770858s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-685862 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.91s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (38.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-685862
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-685862-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-685862-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (76.005958ms)

                                                
                                                
-- stdout --
	* [multinode-685862-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-685862-m02' is duplicated with machine name 'multinode-685862-m02' in profile 'multinode-685862'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-685862-m03 --driver=kvm2  --container-runtime=crio
E1201 20:02:41.102746   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-685862-m03 --driver=kvm2  --container-runtime=crio: (37.639350061s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-685862
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-685862: exit status 80 (206.032394ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-685862 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-685862-m03 already exists in multinode-685862-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-685862-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.79s)
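A short sketch of the naming rule being validated: a new profile name may not collide with a machine name already used inside another profile, so checking the existing profiles first avoids the exit-14 failure seen above.
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 start -p multinode-685862-m02 --driver=kvm2 --container-runtime=crio   # rejected: name already in use as a machine of profile multinode-685862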

                                                
                                    
x
+
TestScheduledStopUnix (107.75s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-733414 --memory=3072 --driver=kvm2  --container-runtime=crio
E1201 20:05:51.295505   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-733414 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.119692173s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-733414 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 20:06:16.797733   44373 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:06:16.797896   44373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:16.797906   44373 out.go:374] Setting ErrFile to fd 2...
	I1201 20:06:16.797911   44373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:16.798122   44373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:06:16.798394   44373 out.go:368] Setting JSON to false
	I1201 20:06:16.798497   44373 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:16.798847   44373 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:06:16.798929   44373 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/config.json ...
	I1201 20:06:16.799141   44373 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:16.799276   44373 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-733414 -n scheduled-stop-733414
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-733414 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 20:06:17.089514   44419 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:06:17.089738   44419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:17.089746   44419 out.go:374] Setting ErrFile to fd 2...
	I1201 20:06:17.089749   44419 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:17.089948   44419 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:06:17.090157   44419 out.go:368] Setting JSON to false
	I1201 20:06:17.090364   44419 daemonize_unix.go:73] killing process 44408 as it is an old scheduled stop
	I1201 20:06:17.090473   44419 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:17.090842   44419 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:06:17.090926   44419 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/config.json ...
	I1201 20:06:17.091111   44419 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:17.091238   44419 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1201 20:06:17.096027   16868 retry.go:31] will retry after 140.806µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.097188   16868 retry.go:31] will retry after 88.808µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.098330   16868 retry.go:31] will retry after 262.029µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.099493   16868 retry.go:31] will retry after 296.258µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.100634   16868 retry.go:31] will retry after 329.111µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.101762   16868 retry.go:31] will retry after 656.407µs: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.102901   16868 retry.go:31] will retry after 1.072561ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.104028   16868 retry.go:31] will retry after 1.746652ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.106241   16868 retry.go:31] will retry after 3.118787ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.110535   16868 retry.go:31] will retry after 2.848283ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.113803   16868 retry.go:31] will retry after 8.182779ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.123063   16868 retry.go:31] will retry after 6.281354ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.130309   16868 retry.go:31] will retry after 16.988697ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.147569   16868 retry.go:31] will retry after 19.074678ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.166782   16868 retry.go:31] will retry after 17.882557ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
I1201 20:06:17.185116   16868 retry.go:31] will retry after 48.4564ms: open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-733414 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-733414 -n scheduled-stop-733414
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-733414
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-733414 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1201 20:06:42.807112   44568 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:06:42.807394   44568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:42.807405   44568 out.go:374] Setting ErrFile to fd 2...
	I1201 20:06:42.807409   44568 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:06:42.807685   44568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:06:42.807972   44568 out.go:368] Setting JSON to false
	I1201 20:06:42.808067   44568 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:42.808402   44568 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1201 20:06:42.808496   44568 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/scheduled-stop-733414/config.json ...
	I1201 20:06:42.808696   44568 mustload.go:66] Loading cluster: scheduled-stop-733414
	I1201 20:06:42.808817   44568 config.go:182] Loaded profile config "scheduled-stop-733414": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-733414
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-733414: exit status 7 (61.454487ms)

                                                
                                                
-- stdout --
	scheduled-stop-733414
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-733414 -n scheduled-stop-733414
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-733414 -n scheduled-stop-733414: exit status 7 (60.084517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-733414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-733414
--- PASS: TestScheduledStopUnix (107.75s)
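For reference, the scheduled-stop workflow exercised above reduces to the following commands (same profile name as the test):
	# schedule a stop, inspect the remaining time, then cancel it
	out/minikube-linux-amd64 stop -p scheduled-stop-733414 --schedule 5m
	out/minikube-linux-amd64 status -p scheduled-stop-733414 --format={{.TimeToStop}}
	out/minikube-linux-amd64 stop -p scheduled-stop-733414 --cancel-scheduled
	# a short schedule that is allowed to fire leaves the profile Stopped (status then exits 7)
	out/minikube-linux-amd64 stop -p scheduled-stop-733414 --schedule 15s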

                                                
                                    
x
+
TestRunningBinaryUpgrade (391.65s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2566981632 start -p running-upgrade-399758 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1201 20:07:41.098631   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2566981632 start -p running-upgrade-399758 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m33.324884371s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-399758 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-399758 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m54.036147234s)
helpers_test.go:175: Cleaning up "running-upgrade-399758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-399758
E1201 20:13:59.938492   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestRunningBinaryUpgrade (391.65s)

                                                
                                    
x
+
TestKubernetesUpgrade (102.91s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.008824516s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-903802
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-903802: (1.983112776s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-903802 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-903802 status --format={{.Host}}: exit status 7 (64.399099ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.011536892s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-903802 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
E1201 20:12:41.098327   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (1.486302574s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-903802] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-903802
	    minikube start -p kubernetes-upgrade-903802 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9038022 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-903802 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (16.277439921s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-903802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-903802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-903802: (1.005878292s)
--- PASS: TestKubernetesUpgrade (102.91s)
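A condensed sketch of the upgrade path exercised above: start on the old version, stop, restart on the newer version; an in-place downgrade is refused with exit code 106.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-903802
	out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p kubernetes-upgrade-903802 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # exits 106 (K8S_DOWNGRADE_UNSUPPORTED)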

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (81.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-539916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-539916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m21.112639338s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (81.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (95.175909ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-327373] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
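The expected failure above is the flag-conflict guard: --no-kubernetes cannot be combined with --kubernetes-version (exit 14). A minimal sketch of the accepted form, following the suggestion in the stderr output:
	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --driver=kvm2 --container-runtime=crio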

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (77.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327373 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327373 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.076419671s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-327373 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (5.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (4.235992786s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-327373 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-327373 status -o json: exit status 2 (240.805656ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-327373","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-327373
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-327373: (1.411831062s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (5.89s)
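For reference, the machine-readable status call above still works once Kubernetes is stopped inside a running VM; the exit code simply reflects the degraded state.
	out/minikube-linux-amd64 -p NoKubernetes-327373 status -o json
	echo "status exit code: $?"   # 2 in this run: Host Running, Kubelet/APIServer Stopped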

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-539916 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [52388ee2-f291-4467-a6b4-88d98c216760] Pending
helpers_test.go:352: "busybox" [52388ee2-f291-4467-a6b4-88d98c216760] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [52388ee2-f291-4467-a6b4-88d98c216760] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00574251s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-539916 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (23.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327373 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (23.078270971s)
--- PASS: TestNoKubernetes/serial/Start (23.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-539916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-539916 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175728209s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-539916 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (85.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-539916 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-539916 --alsologtostderr -v=3: (1m25.824202057s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-12903/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-327373 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-327373 "sudo systemctl is-active --quiet service kubelet": exit status 1 (177.991663ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (3.881866806s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.186753929s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-327373
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-327373: (1.544970557s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (33.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327373 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327373 --driver=kvm2  --container-runtime=crio: (33.092626584s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (33.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-327373 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-327373 "sudo systemctl is-active --quiet service kubelet": exit status 1 (167.916612ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-058896 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-058896 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (116.418718ms)

                                                
                                                
-- stdout --
	* [false-058896] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1201 20:09:55.950033   47605 out.go:360] Setting OutFile to fd 1 ...
	I1201 20:09:55.950296   47605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:55.950306   47605 out.go:374] Setting ErrFile to fd 2...
	I1201 20:09:55.950313   47605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1201 20:09:55.950542   47605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-12903/.minikube/bin
	I1201 20:09:55.951027   47605 out.go:368] Setting JSON to false
	I1201 20:09:55.951946   47605 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6739,"bootTime":1764613057,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1201 20:09:55.952003   47605 start.go:143] virtualization: kvm guest
	I1201 20:09:55.953983   47605 out.go:179] * [false-058896] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1201 20:09:55.955526   47605 out.go:179]   - MINIKUBE_LOCATION=21997
	I1201 20:09:55.955531   47605 notify.go:221] Checking for updates...
	I1201 20:09:55.957938   47605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1201 20:09:55.958959   47605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-12903/kubeconfig
	I1201 20:09:55.960190   47605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-12903/.minikube
	I1201 20:09:55.963025   47605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1201 20:09:55.964272   47605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1201 20:09:55.965851   47605 config.go:182] Loaded profile config "NoKubernetes-327373": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1201 20:09:55.965991   47605 config.go:182] Loaded profile config "old-k8s-version-539916": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1201 20:09:55.966088   47605 config.go:182] Loaded profile config "running-upgrade-399758": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1201 20:09:55.966192   47605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1201 20:09:56.001652   47605 out.go:179] * Using the kvm2 driver based on user configuration
	I1201 20:09:56.002721   47605 start.go:309] selected driver: kvm2
	I1201 20:09:56.002735   47605 start.go:927] validating driver "kvm2" against <nil>
	I1201 20:09:56.002746   47605 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1201 20:09:56.004311   47605 out.go:203] 
	W1201 20:09:56.005532   47605 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1201 20:09:56.006867   47605 out.go:203] 

                                                
                                                
** /stderr **
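The exit-14 failure above is the expected guard: the crio runtime requires a CNI, so --cni=false is rejected. A minimal sketch of an invocation that passes the check (the bridge CNI choice is illustrative and not part of this test):
	out/minikube-linux-amd64 start -p false-058896 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio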
net_test.go:88: 
----------------------- debugLogs start: false-058896 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.4:8443
  name: old-k8s-version-539916
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.162:8443
  name: running-upgrade-399758
contexts:
- context:
    cluster: old-k8s-version-539916
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-539916
  name: old-k8s-version-539916
- context:
    cluster: running-upgrade-399758
    user: running-upgrade-399758
  name: running-upgrade-399758
current-context: ""
kind: Config
users:
- name: old-k8s-version-539916
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.key
- name: running-upgrade-399758
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-058896

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058896"

                                                
                                                
----------------------- debugLogs end: false-058896 [took: 3.2553569s] --------------------------------
helpers_test.go:175: Cleaning up "false-058896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-058896
--- PASS: TestNetworkPlugins/group/false (3.54s)

                                                
                                    
x
+
TestISOImage/Setup (22.34s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-790070 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-790070 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.342142096s)
--- PASS: TestISOImage/Setup (22.34s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.24s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.24s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which curl"
E1201 20:20:52.167959   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which docker"
E1201 20:20:51.924347   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:20:52.005884   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which iptables"
E1201 20:20:51.842562   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:20:51.849032   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:20:51.860558   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:20:51.882068   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which socat"
E1201 20:20:51.294936   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-539916 -n old-k8s-version-539916
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-539916 -n old-k8s-version-539916: exit status 7 (71.530849ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-539916 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (78.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-539916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1201 20:10:34.371341   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:10:51.295300   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-539916 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m17.833125319s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-539916 -n old-k8s-version-539916
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (78.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h6k77" [94fb9fcb-ffab-4d90-8ae2-d18dd762470b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h6k77" [94fb9fcb-ffab-4d90-8ae2-d18dd762470b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003865183s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-h6k77" [94fb9fcb-ffab-4d90-8ae2-d18dd762470b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008527402s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-539916 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-539916 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-539916 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-539916 -n old-k8s-version-539916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-539916 -n old-k8s-version-539916: exit status 2 (266.238391ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-539916 -n old-k8s-version-539916
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-539916 -n old-k8s-version-539916: exit status 2 (235.864255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-539916 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-539916 -n old-k8s-version-539916
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-539916 -n old-k8s-version-539916
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (80.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.393438071 start -p stopped-upgrade-921033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.393438071 start -p stopped-upgrade-921033 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (47.872789229s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.393438071 -p stopped-upgrade-921033 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.393438071 -p stopped-upgrade-921033 stop: (1.747422971s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-921033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-921033 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.041319519s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (80.66s)

                                                
                                    
x
+
TestPause/serial/Start (93.65s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-092823 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-092823 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m33.652240224s)
--- PASS: TestPause/serial/Start (93.65s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-921033
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-921033: (1.23256139s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (91.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1201 20:13:49.682241   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:49.688703   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:49.700217   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:49.721699   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:49.763515   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:49.845770   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:50.007915   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:50.329711   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:50.971641   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:52.253754   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:13:54.816002   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m31.482660895s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.48s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1201 20:14:30.661614   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m26.641395678s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m16.164683837s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-200621 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [82484e01-ebd0-4633-9bc5-ac02f358119c] Pending
helpers_test.go:352: "busybox" [82484e01-ebd0-4633-9bc5-ac02f358119c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [82484e01-ebd0-4633-9bc5-ac02f358119c] Running
E1201 20:15:11.622919   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004300863s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-200621 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-200621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-200621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12418352s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-200621 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (87.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-200621 --alsologtostderr -v=3
E1201 20:15:25.633484   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-200621 --alsologtostderr -v=3: (1m27.137868904s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (53.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-785480 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1201 20:15:51.294742   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-162795/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-785480 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (53.035427121s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (53.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-240409 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [969b140a-43d4-491e-b8ea-2be5c95025fd] Pending
helpers_test.go:352: "busybox" [969b140a-43d4-491e-b8ea-2be5c95025fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [969b140a-43d4-491e-b8ea-2be5c95025fd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004619972s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-240409 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-240409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-240409 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11352175s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-240409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-931553 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4cd260bb-596e-4790-9019-f3118dd29163] Pending
helpers_test.go:352: "busybox" [4cd260bb-596e-4790-9019-f3118dd29163] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4cd260bb-596e-4790-9019-f3118dd29163] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00428911s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-931553 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-240409 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-240409 --alsologtostderr -v=3: (1m23.302995155s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-931553 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-931553 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.114250074s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-931553 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (83.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-931553 --alsologtostderr -v=3
E1201 20:16:33.545091   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-931553 --alsologtostderr -v=3: (1m23.729930683s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (83.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-785480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-785480 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-785480 --alsologtostderr -v=3: (7.054538313s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200621 -n embed-certs-200621
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200621 -n embed-certs-200621: exit status 7 (71.005032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-200621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)
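minikube status encodes host state in its exit code, which is why the exit status 7 above is logged as "(may be ok)" rather than as a failure. A small Go sketch of that interpretation, using only the status command shown in the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit from `minikube status` is not automatically a failure:
	// exit status 7 is how a stopped host is reported for this check.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-200621", "-n", "embed-certs-200621")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the "status error: exit status 7 (may be ok)" line above.
		fmt.Printf("host reported as stopped: %s\n", out)
	default:
		fmt.Printf("status failed: %v\n%s", err, out)
	}
}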

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-785480 -n newest-cni-785480
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-785480 -n newest-cni-785480: exit status 7 (71.366089ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-785480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-200621 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (44.923519569s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-200621 -n embed-certs-200621
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.26s)
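SecondStart restarts the same profile with the same flags and then only checks that the host came back as Running. A compact Go sketch of that sequence, with the start flags abridged from the log (not the suite's code):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "embed-certs-200621"

	// Second start: same profile and the key flags from the log (abridged).
	start := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--wait=true", "--embed-certs",
		"--driver=kvm2", "--container-runtime=crio", "--kubernetes-version=v1.34.2")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start: %v\n%s", err, out)
	}

	// The follow-up check only cares that the host is Running again.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	if err != nil {
		log.Fatalf("status: %v\n%s", err, out)
	}
	if got := strings.TrimSpace(string(out)); got != "Running" {
		log.Fatalf("host is %q, want Running", got)
	}
	log.Println("host is Running after restart")
}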

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (59.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-785480 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-785480 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (59.083541736s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-785480 -n newest-cni-785480
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (59.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409: exit status 7 (67.275356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-240409 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t2nqj" [bdcd6583-62b5-404d-9674-425b02c9a6ff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t2nqj" [bdcd6583-62b5-404d-9674-425b02c9a6ff] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004346767s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (7.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-240409 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (48.508112654s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.76s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-t2nqj" [bdcd6583-62b5-404d-9674-425b02c9a6ff] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004928594s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-200621 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-931553 -n no-preload-931553
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-931553 -n no-preload-931553: exit status 7 (87.834149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-931553 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-931553 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m2.71879167s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-931553 -n no-preload-931553
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (63.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-200621 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
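VerifyKubernetesImages lists the images loaded in the cluster and reports anything outside the expected set, which is how the kindnetd and busybox entries above get flagged. The sketch below approximates that check; it assumes plain `image list` output is one image reference per line and uses a simple registry.k8s.io prefix test instead of the suite's exact per-version expected list.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Assumption: the default `image list` format prints one image per line.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "embed-certs-200621", "image", "list").CombinedOutput()
	if err != nil {
		log.Fatalf("image list: %v\n%s", err, out)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		img := strings.TrimSpace(line)
		// Crude stand-in for the real expected-image comparison.
		if img != "" && !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}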

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-200621 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200621 -n embed-certs-200621
E1201 20:17:41.098662   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200621 -n embed-certs-200621: exit status 2 (251.595658ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200621 -n embed-certs-200621
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200621 -n embed-certs-200621: exit status 2 (241.886389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-200621 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-200621 -n embed-certs-200621
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-200621 -n embed-certs-200621
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)
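The Pause test is a cycle: pause the profile, confirm APIServer reports Paused and Kubelet reports Stopped (both via status calls that exit with status 2), then unpause and check again. A hedged Go sketch of that cycle:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// status reads one status field; exit status 2 is expected while components
// are paused or stopped, so it is not treated as a hard failure here.
func status(profile, field string) string {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).CombinedOutput()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		log.Fatalf("status %s: %v", field, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "embed-certs-200621"

	if out, err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("pause: %v\n%s", err, out)
	}
	// While paused the report shows APIServer=Paused and Kubelet=Stopped.
	fmt.Println("paused:", status(profile, "APIServer"), status(profile, "Kubelet"))

	if out, err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("unpause: %v\n%s", err, out)
	}
	fmt.Println("unpaused:", status(profile, "APIServer"), status(profile, "Kubelet"))
}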

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-785480 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-785480 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-785480 -n newest-cni-785480
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-785480 -n newest-cni-785480: exit status 2 (233.417906ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-785480 -n newest-cni-785480
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-785480 -n newest-cni-785480: exit status 2 (263.888768ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-785480 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-785480 -n newest-cni-785480
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-785480 -n newest-cni-785480
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m41.978609512s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.98s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (100.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m40.121038817s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (100.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-75bcr" [eeec853c-b381-41e6-9c3f-e865ae3cc731] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-75bcr" [eeec853c-b381-41e6-9c3f-e865ae3cc731] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005763994s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-75bcr" [eeec853c-b381-41e6-9c3f-e865ae3cc731] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006368938s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-240409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-240409 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-240409 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-240409 --alsologtostderr -v=1: (1.274125s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409: exit status 2 (300.258973ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409: exit status 2 (286.251546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-240409 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-240409 -n default-k8s-diff-port-240409
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.56s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m21.748710284s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-2fwx6" [a9d867a2-9b1c-4fa4-8cbc-b499fced53e1] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-2fwx6" [a9d867a2-9b1c-4fa4-8cbc-b499fced53e1] Running
E1201 20:18:49.682600   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005235915s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-2fwx6" [a9d867a2-9b1c-4fa4-8cbc-b499fced53e1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004470548s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-931553 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-931553 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-931553 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-931553 --alsologtostderr -v=1: (1.195469977s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-931553 -n no-preload-931553
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-931553 -n no-preload-931553: exit status 2 (269.552792ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-931553 -n no-preload-931553
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-931553 -n no-preload-931553: exit status 2 (239.649505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-931553 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-931553 -n no-preload-931553
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-931553 -n no-preload-931553
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (79.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1201 20:19:04.179041   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/addons-153147/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:19:17.387099   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m19.421812922s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (79.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mnw49" [5ceb106a-d2f5-4466-9073-e53c05b336f5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004032355s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-058896 "pgrep -a kubelet"
I1201 20:19:26.377571   16868 config.go:182] Loaded profile config "auto-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fqzkl" [3c8177c9-0762-41e8-a020-4f1cdd952c0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fqzkl" [3c8177c9-0762-41e8-a020-4f1cdd952c0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00459554s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-058896 "pgrep -a kubelet"
I1201 20:19:32.370033   16868 config.go:182] Loaded profile config "kindnet-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6l49s" [fb7f0aae-8e23-4fb5-bb50-8e237b963598] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6l49s" [fb7f0aae-8e23-4fb5-bb50-8e237b963598] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005439652s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
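The DNS, Localhost and HairPin checks above all run inside the netcat deployment: resolve kubernetes.default, connect to localhost:8080, and connect back to the pod through its own service name. A minimal Go wrapper around those three kubectl exec calls, with the commands copied from the log:

package main

import (
	"log"
	"os/exec"
)

// netcatExec runs a command inside the netcat deployment, mirroring the
// DNS / Localhost / HairPin checks in the report.
func netcatExec(args ...string) {
	base := []string{"--context", "auto-058896", "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	// DNS: cluster DNS must resolve the kubernetes service.
	netcatExec("nslookup", "kubernetes.default")
	// Localhost: the pod can reach its own listener on 8080.
	netcatExec("/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod can reach itself through its own service name.
	netcatExec("/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
	log.Println("DNS, localhost and hairpin checks passed")
}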

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.037262032s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.04s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-kf9l5" [8f5a7af8-d84b-4151-b125-dbedcf6195e5] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-kf9l5" [8f5a7af8-d84b-4151-b125-dbedcf6195e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00480538s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (87.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m27.570386968s)
--- PASS: TestNetworkPlugins/group/flannel/Start (87.57s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-058896 "pgrep -a kubelet"
I1201 20:20:06.783924   16868 config.go:182] Loaded profile config "calico-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hnhw8" [e65660c4-13cc-44b1-a8a0-3f344eb07112] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1201 20:20:08.705241   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hnhw8" [e65660c4-13cc-44b1-a8a0-3f344eb07112] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006349318s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-058896 "pgrep -a kubelet"
I1201 20:20:20.166469   16868 config.go:182] Loaded profile config "custom-flannel-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mvd28" [46fa2121-099b-49fe-abf7-0b10663a0d5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1201 20:20:25.633946   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/functional-510618/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mvd28" [46fa2121-099b-49fe-abf7-0b10663a0d5d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00506608s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-058896 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m25.780914802s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.78s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
E1201 20:20:53.131232   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1201 20:20:53.216786   16868 config.go:182] Loaded profile config "enable-default-cni-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
E1201 20:20:52.489448   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.22s)
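Taken together, these PersistentMounts subtests confirm that each of the paths above lives on an ext4 filesystem (the ISO's persistent data disk) rather than on the tmpfs root. The same check can be run by hand in one call against this run's profile (a sketch only; df accepts several paths at once):

	out/minikube-linux-amd64 -p guest-790070 ssh \
	  "df -t ext4 /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker"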

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-058896 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qtwt7" [e5276643-c58d-401b-9ac2-f1091d7ae5f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qtwt7" [e5276643-c58d-401b-9ac2-f1091d7ae5f4] Running
E1201 20:21:02.096391   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.483927   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.490296   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.501635   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.523317   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.564780   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.646980   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:03.809295   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1201 20:21:04.131638   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004603356s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0.19s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1764600683-21997
iso_test.go:118:   kicbase_version: v0.0.48-1764169655-21974
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 35a1a23c8991801825a9ca8eab844d9f0ceb5eab
--- PASS: TestISOImage/VersionJSON (0.19s)
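For reference, the fields logged above are parsed straight out of /version.json on the ISO; the file can be dumped manually with the same ssh invocation the test uses (the values below simply repeat what this run reported; the key order inside the file is not shown in the log):

	out/minikube-linux-amd64 -p guest-790070 ssh "cat /version.json"
	# keys and values seen in this run:
	#   iso_version:      v1.37.0-1764600683-21997
	#   kicbase_version:  v0.0.48-1764169655-21974
	#   minikube_version: v1.37.0
	#   commit:           35a1a23c8991801825a9ca8eab844d9f0ceb5eab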

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-790070 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
E1201 20:20:56.974996   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1201 20:21:04.773086   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
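The Localhost and HairPin subtests both probe from inside the netcat pod: Localhost checks that port 8080 answers on 127.0.0.1, while HairPin checks that the pod can reach itself back through its own "netcat" Service name (hairpin traffic). Equivalent manual probes against this cluster, copied from the commands above:

	kubectl --context enable-default-cni-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context enable-default-cni-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"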

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-hmbn7" [d6289e79-0580-4d65-ab7a-98face26e871] Running
E1201 20:21:32.819653   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/default-k8s-diff-port-240409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003844058s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-058896 "pgrep -a kubelet"
I1201 20:21:35.264429   16868 config.go:182] Loaded profile config "flannel-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4zwd7" [1c6b8d4b-004c-47f0-b7e6-5033deffe77e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4zwd7" [1c6b8d4b-004c-47f0-b7e6-5033deffe77e] Running
E1201 20:21:44.462059   16868 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/no-preload-931553/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004529608s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-058896 "pgrep -a kubelet"
I1201 20:22:02.459329   16868 config.go:182] Loaded profile config "bridge-058896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-058896 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h7rsw" [709ead5f-8033-4ef7-bfc2-2d544b030fa9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h7rsw" [709ead5f-8033-4ef7-bfc2-2d544b030fa9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004189166s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-058896 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-058896 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (51/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.14
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
361 TestStartStop/group/disable-driver-mounts 0.19
379 TestNetworkPlugins/group/kubenet 3.43
389 TestNetworkPlugins/group/cilium 3.82
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1201 19:05:28.809326   16868 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1201 19:05:28.936415   16868 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1201 19:05:28.953919   16868 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.14s)
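The two 404s above simply mean that no preload tarball has been published yet for Kubernetes v1.35.0-beta.0 with the cri-o runtime, so the test skips rather than fails. The same check can be reproduced with curl against the URL from the log (a sketch; only the status line matters):

	curl -sI "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4" | head -n 1
	# expect a 404 status, matching what preload.go:144 logged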

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-153147 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
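All of the TunnelCmd skips in this group share one cause: functional_test_tunnel_test.go:90 found that 'route' cannot be executed without a password prompt on this host, and the tunnel tests need that to install routes. A rough manual check of the precondition (an assumed form of the probe; the exact sudo invocation used by the test is not shown in the log):

	sudo -n route -n >/dev/null 2>&1 && echo "passwordless route: OK" || echo "password required; tunnel tests will skip"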

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-893069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-893069
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-058896 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.4:8443
  name: old-k8s-version-539916
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.162:8443
  name: running-upgrade-399758
contexts:
- context:
    cluster: old-k8s-version-539916
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-539916
  name: old-k8s-version-539916
- context:
    cluster: running-upgrade-399758
    user: running-upgrade-399758
  name: running-upgrade-399758
current-context: ""
kind: Config
users:
- name: old-k8s-version-539916
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.key
- name: running-upgrade-399758
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-058896

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058896"

                                                
                                                
----------------------- debugLogs end: kubenet-058896 [took: 3.260880158s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-058896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-058896
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)
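Note: the repeated 'Profile "kubenet-058896" not found' and 'context "kubenet-058896" does not exist' lines above are expected for a skipped plugin group: the test is skipped before any cluster is created, but the debugLogs helper still runs its full battery of host and kubectl probes against the never-created profile. Below is a minimal sketch of how such a collection step could be guarded; it assumes the JSON output shape of `minikube profile list -o json` (the struct models only the fields used here) and is an illustration, not the harness's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the piece of `minikube profile list -o json`
// output needed for this check; the field names are an assumption.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists reports whether a minikube profile with the given name
// is currently known to minikube.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("kubenet-058896")
	if err != nil {
		fmt.Println("profile check failed:", err)
		return
	}
	if !ok {
		fmt.Println("profile not found; skipping debug log collection")
		return
	}
	fmt.Println("profile found; safe to collect host and k8s debug logs")
}

With a guard like this, the skipped-profile sections would collapse to a single "profile not found" line instead of one error per probe.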

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-058896 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-058896" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.4:8443
  name: old-k8s-version-539916
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-12903/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:09:42 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.162:8443
  name: running-upgrade-399758
contexts:
- context:
    cluster: old-k8s-version-539916
    extensions:
    - extension:
        last-update: Mon, 01 Dec 2025 20:08:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: old-k8s-version-539916
  name: old-k8s-version-539916
- context:
    cluster: running-upgrade-399758
    user: running-upgrade-399758
  name: running-upgrade-399758
current-context: ""
kind: Config
users:
- name: old-k8s-version-539916
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/old-k8s-version-539916/client.key
- name: running-upgrade-399758
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.crt
    client-key: /home/jenkins/minikube-integration/21997-12903/.minikube/profiles/running-upgrade-399758/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-058896

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-058896" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058896"

                                                
                                                
----------------------- debugLogs end: cilium-058896 [took: 3.660368701s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-058896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-058896
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)
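Note: the kubectl config dumps in both debugLogs sections list only the old-k8s-version-539916 and running-upgrade-399758 contexts, which is why every kubectl probe against cilium-058896 (and kubenet-058896) fails with "context was not found". As a hedged illustration, not part of the test suite, a small Go program using client-go's clientcmd loader can enumerate the contexts in the active kubeconfig before any debugging is attempted:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig via the standard loading rules
	// (KUBECONFIG env var, then ~/.kube/config).
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("failed to load kubeconfig:", err)
		return
	}

	// Print every context so a missing profile such as
	// cilium-058896 is obvious at a glance.
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q (user %q)\n", name, ctx.Cluster, ctx.AuthInfo)
	}

	if _, ok := cfg.Contexts["cilium-058896"]; !ok {
		fmt.Println(`context "cilium-058896" does not exist in this kubeconfig`)
	}
}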

                                                
                                    