Test Report: KVM_Linux_crio 22141

2191194101c4a9ddc7fa6949616ce2e0ec39dec5:2025-12-16:42801
Failed tests (3/431)

Order  Failed test                                    Duration (s)
46     TestAddons/parallel/Ingress                    159.21
345    TestPreload                                    146.54
404    TestPause/serial/SecondStartNoReconfiguration  33.54
TestAddons/parallel/Ingress (159.21s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-153066 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-153066 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-153066 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [b42be6a9-0973-4607-a39f-f43345bc18fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [b42be6a9-0973-4607-a39f-f43345bc18fe] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.004075282s
I1216 04:29:15.134377    8987 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-153066 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.632813885s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-153066 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.189
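The ssh curl above failed with status 28, curl's exit code for an operation timeout, so the ingress endpoint never answered within the 2m13s window. A minimal sketch for re-running the same check by hand (assumes the addons-153066 profile from this run is still up; the minikube and kubectl invocations mirror the ones logged above, while the extra curl flags are added here only for illustration):

	kubectl --context addons-153066 -n ingress-nginx get pods -o wide
	out/minikube-linux-amd64 -p addons-153066 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
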
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-153066 -n addons-153066
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 logs -n 25: (1.164689177s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-292678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-292678 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ start   │ --download-only -p binary-mirror-194309 --alsologtostderr --binary-mirror http://127.0.0.1:44661 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-194309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ delete  │ -p binary-mirror-194309                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-194309 │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:26 UTC │
	│ addons  │ enable dashboard -p addons-153066                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ addons  │ disable dashboard -p addons-153066                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │                     │
	│ start   │ -p addons-153066 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:26 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ addons-153066 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ addons-153066 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ enable headlamp -p addons-153066 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ addons-153066 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ addons-153066 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:28 UTC │
	│ addons  │ addons-153066 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:28 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ ip      │ addons-153066 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ ssh     │ addons-153066 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │                     │
	│ ssh     │ addons-153066 ssh cat /opt/local-path-provisioner/pvc-f15dac49-fd5a-496e-bac7-888f900e7fe3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:30 UTC │
	│ addons  │ addons-153066 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153066                                                                                                                                                                                                                                                                                                                                                                                         │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ addons  │ addons-153066 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:29 UTC │ 16 Dec 25 04:29 UTC │
	│ ip      │ addons-153066 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-153066        │ jenkins │ v1.37.0 │ 16 Dec 25 04:31 UTC │ 16 Dec 25 04:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:26:12
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:26:12.434032    9940 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:26:12.434245    9940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:12.434253    9940 out.go:374] Setting ErrFile to fd 2...
	I1216 04:26:12.434257    9940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:26:12.434445    9940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:26:12.434978    9940 out.go:368] Setting JSON to false
	I1216 04:26:12.435725    9940 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":514,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:26:12.435797    9940 start.go:143] virtualization: kvm guest
	I1216 04:26:12.437635    9940 out.go:179] * [addons-153066] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:26:12.438703    9940 notify.go:221] Checking for updates...
	I1216 04:26:12.438813    9940 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:26:12.440332    9940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:26:12.441519    9940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:26:12.442608    9940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:26:12.443640    9940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:26:12.444763    9940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:26:12.446095    9940 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:26:12.475371    9940 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 04:26:12.476581    9940 start.go:309] selected driver: kvm2
	I1216 04:26:12.476592    9940 start.go:927] validating driver "kvm2" against <nil>
	I1216 04:26:12.476602    9940 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:26:12.477269    9940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:26:12.477491    9940 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:26:12.477513    9940 cni.go:84] Creating CNI manager for ""
	I1216 04:26:12.477586    9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:26:12.477596    9940 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:26:12.477632    9940 start.go:353] cluster config:
	{Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1216 04:26:12.477712    9940 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:26:12.479042    9940 out.go:179] * Starting "addons-153066" primary control-plane node in "addons-153066" cluster
	I1216 04:26:12.480141    9940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:26:12.480164    9940 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:26:12.480170    9940 cache.go:65] Caching tarball of preloaded images
	I1216 04:26:12.480241    9940 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 04:26:12.480251    9940 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:26:12.480552    9940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json ...
	I1216 04:26:12.480573    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json: {Name:mk46adce3dd880825a7aefcae063e7ae67cca56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:12.480697    9940 start.go:360] acquireMachinesLock for addons-153066: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 04:26:12.480738    9940 start.go:364] duration metric: took 29.539µs to acquireMachinesLock for "addons-153066"
	I1216 04:26:12.480754    9940 start.go:93] Provisioning new machine with config: &{Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:26:12.480821    9940 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 04:26:12.482422    9940 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1216 04:26:12.482565    9940 start.go:159] libmachine.API.Create for "addons-153066" (driver="kvm2")
	I1216 04:26:12.482592    9940 client.go:173] LocalClient.Create starting
	I1216 04:26:12.482665    9940 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem
	I1216 04:26:12.641223    9940 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem
	I1216 04:26:12.724611    9940 main.go:143] libmachine: creating domain...
	I1216 04:26:12.724632    9940 main.go:143] libmachine: creating network...
	I1216 04:26:12.725967    9940 main.go:143] libmachine: found existing default network
	I1216 04:26:12.726128    9940 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 04:26:12.726636    9940 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e54870}
	I1216 04:26:12.726720    9940 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-153066</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 04:26:12.732885    9940 main.go:143] libmachine: creating private network mk-addons-153066 192.168.39.0/24...
	I1216 04:26:12.795891    9940 main.go:143] libmachine: private network mk-addons-153066 192.168.39.0/24 created
	I1216 04:26:12.796151    9940 main.go:143] libmachine: <network>
	  <name>mk-addons-153066</name>
	  <uuid>f6816a7a-c807-42a7-8e60-9e09a60af5c0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:25:8a:31'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 04:26:12.796180    9940 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 ...
	I1216 04:26:12.796200    9940 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22141-5059/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1216 04:26:12.796211    9940 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:26:12.796273    9940 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22141-5059/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22141-5059/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1216 04:26:13.064358    9940 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa...
	I1216 04:26:13.107282    9940 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk...
	I1216 04:26:13.107323    9940 main.go:143] libmachine: Writing magic tar header
	I1216 04:26:13.107344    9940 main.go:143] libmachine: Writing SSH key tar header
	I1216 04:26:13.107419    9940 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 ...
	I1216 04:26:13.107479    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066
	I1216 04:26:13.107501    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066 (perms=drwx------)
	I1216 04:26:13.107510    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube/machines
	I1216 04:26:13.107522    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube/machines (perms=drwxr-xr-x)
	I1216 04:26:13.107534    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:26:13.107542    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059/.minikube (perms=drwxr-xr-x)
	I1216 04:26:13.107552    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22141-5059
	I1216 04:26:13.107559    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22141-5059 (perms=drwxrwxr-x)
	I1216 04:26:13.107569    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1216 04:26:13.107577    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 04:26:13.107587    9940 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1216 04:26:13.107594    9940 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 04:26:13.107603    9940 main.go:143] libmachine: checking permissions on dir: /home
	I1216 04:26:13.107615    9940 main.go:143] libmachine: skipping /home - not owner
	I1216 04:26:13.107621    9940 main.go:143] libmachine: defining domain...
	I1216 04:26:13.108754    9940 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-153066</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-153066'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1216 04:26:13.115987    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:89:32:e6 in network default
	I1216 04:26:13.116527    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:13.116544    9940 main.go:143] libmachine: starting domain...
	I1216 04:26:13.116548    9940 main.go:143] libmachine: ensuring networks are active...
	I1216 04:26:13.117137    9940 main.go:143] libmachine: Ensuring network default is active
	I1216 04:26:13.117465    9940 main.go:143] libmachine: Ensuring network mk-addons-153066 is active
	I1216 04:26:13.117967    9940 main.go:143] libmachine: getting domain XML...
	I1216 04:26:13.118785    9940 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-153066</name>
	  <uuid>b9b65814-80b7-4e0a-92c8-4d21ede24ac3</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/addons-153066.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c6:57:6e'/>
	      <source network='mk-addons-153066'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:89:32:e6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1216 04:26:14.371510    9940 main.go:143] libmachine: waiting for domain to start...
	I1216 04:26:14.372591    9940 main.go:143] libmachine: domain is now running
	I1216 04:26:14.372606    9940 main.go:143] libmachine: waiting for IP...
	I1216 04:26:14.373230    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:14.373797    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:14.373812    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:14.374048    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:14.374084    9940 retry.go:31] will retry after 206.182677ms: waiting for domain to come up
	I1216 04:26:14.581277    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:14.581761    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:14.581788    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:14.582071    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:14.582100    9940 retry.go:31] will retry after 293.803735ms: waiting for domain to come up
	I1216 04:26:14.877483    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:14.877990    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:14.878003    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:14.878242    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:14.878273    9940 retry.go:31] will retry after 366.70569ms: waiting for domain to come up
	I1216 04:26:15.246797    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:15.247378    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:15.247393    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:15.247824    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:15.247864    9940 retry.go:31] will retry after 388.153383ms: waiting for domain to come up
	I1216 04:26:15.637394    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:15.637888    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:15.637906    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:15.638219    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:15.638257    9940 retry.go:31] will retry after 698.046366ms: waiting for domain to come up
	I1216 04:26:16.338095    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:16.338614    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:16.338633    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:16.338897    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:16.338929    9940 retry.go:31] will retry after 725.381934ms: waiting for domain to come up
	I1216 04:26:17.065883    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:17.066447    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:17.066465    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:17.066802    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:17.066837    9940 retry.go:31] will retry after 1.128973689s: waiting for domain to come up
	I1216 04:26:18.197211    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:18.197736    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:18.197751    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:18.198068    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:18.198106    9940 retry.go:31] will retry after 1.258194359s: waiting for domain to come up
	I1216 04:26:19.458700    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:19.459255    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:19.459282    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:19.459610    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:19.459650    9940 retry.go:31] will retry after 1.218744169s: waiting for domain to come up
	I1216 04:26:20.679886    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:20.680439    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:20.680451    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:20.680764    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:20.680810    9940 retry.go:31] will retry after 1.442537405s: waiting for domain to come up
	I1216 04:26:22.125650    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:22.126346    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:22.126370    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:22.126726    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:22.126765    9940 retry.go:31] will retry after 2.564829172s: waiting for domain to come up
	I1216 04:26:24.694377    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:24.694948    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:24.694963    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:24.695211    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:24.695253    9940 retry.go:31] will retry after 2.37531298s: waiting for domain to come up
	I1216 04:26:27.072479    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:27.072976    9940 main.go:143] libmachine: no network interface addresses found for domain addons-153066 (source=lease)
	I1216 04:26:27.072989    9940 main.go:143] libmachine: trying to list again with source=arp
	I1216 04:26:27.073211    9940 main.go:143] libmachine: unable to find current IP address of domain addons-153066 in network mk-addons-153066 (interfaces detected: [])
	I1216 04:26:27.073242    9940 retry.go:31] will retry after 3.46923009s: waiting for domain to come up
	I1216 04:26:30.546096    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.546585    9940 main.go:143] libmachine: domain addons-153066 has current primary IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.546598    9940 main.go:143] libmachine: found domain IP: 192.168.39.189
	I1216 04:26:30.546605    9940 main.go:143] libmachine: reserving static IP address...
	I1216 04:26:30.546945    9940 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-153066", mac: "52:54:00:c6:57:6e", ip: "192.168.39.189"} in network mk-addons-153066
	I1216 04:26:30.728384    9940 main.go:143] libmachine: reserved static IP address 192.168.39.189 for domain addons-153066
	I1216 04:26:30.728410    9940 main.go:143] libmachine: waiting for SSH...
	I1216 04:26:30.728418    9940 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 04:26:30.730920    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.731291    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:30.731310    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.731554    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:30.731792    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:30.731803    9940 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 04:26:30.838568    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:26:30.838924    9940 main.go:143] libmachine: domain creation complete
	I1216 04:26:30.840487    9940 machine.go:94] provisionDockerMachine start ...
	I1216 04:26:30.842970    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.843298    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:30.843316    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.843453    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:30.843716    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:30.843732    9940 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 04:26:30.947446    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 04:26:30.947479    9940 buildroot.go:166] provisioning hostname "addons-153066"
	I1216 04:26:30.950700    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.951140    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:30.951164    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:30.951359    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:30.951608    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:30.951622    9940 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-153066 && echo "addons-153066" | sudo tee /etc/hostname
	I1216 04:26:31.074594    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-153066
	
	I1216 04:26:31.077336    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.077784    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.077811    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.078013    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:31.078246    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:31.078263    9940 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-153066' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-153066/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-153066' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 04:26:31.203843    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 04:26:31.203873    9940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
	I1216 04:26:31.203908    9940 buildroot.go:174] setting up certificates
	I1216 04:26:31.203919    9940 provision.go:84] configureAuth start
	I1216 04:26:31.206493    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.206859    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.206890    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.209067    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.209362    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.209384    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.209499    9940 provision.go:143] copyHostCerts
	I1216 04:26:31.209558    9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
	I1216 04:26:31.209666    9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
	I1216 04:26:31.209751    9940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
	I1216 04:26:31.209825    9940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.addons-153066 san=[127.0.0.1 192.168.39.189 addons-153066 localhost minikube]
	I1216 04:26:31.303447    9940 provision.go:177] copyRemoteCerts
	I1216 04:26:31.303513    9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 04:26:31.305998    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.307261    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.307288    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.307477    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:31.389937    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 04:26:31.418451    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 04:26:31.446584    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
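The three scp calls above copy the Docker-style TLS material (CA, server cert, server key) onto the guest. A minimal, hedged way to confirm the staged server certificate actually chains to the copied CA, run inside the guest over the same SSH session (paths taken from this log):
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# expected output: /etc/docker/server.pem: OK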
	I1216 04:26:31.475317    9940 provision.go:87] duration metric: took 271.351496ms to configureAuth
	I1216 04:26:31.475350    9940 buildroot.go:189] setting minikube options for container-runtime
	I1216 04:26:31.475522    9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:31.478363    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.478758    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.478807    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.478985    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:31.479213    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:31.479236    9940 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 04:26:31.717516    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 04:26:31.717545    9940 machine.go:97] duration metric: took 877.038533ms to provisionDockerMachine
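The /etc/sysconfig/crio.minikube drop-in written just above carries CRIO_MINIKUBE_OPTIONS into the CRI-O daemon; the assumption here is that the guest's crio systemd unit sources it via an EnvironmentFile, so the --insecure-registry flag takes effect after the restart. A hedged check from inside the guest:
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i crio.minikube   # assumption: an EnvironmentFile= line referencing the drop-in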
	I1216 04:26:31.717559    9940 client.go:176] duration metric: took 19.234961055s to LocalClient.Create
	I1216 04:26:31.717578    9940 start.go:167] duration metric: took 19.23501183s to libmachine.API.Create "addons-153066"
	I1216 04:26:31.717588    9940 start.go:293] postStartSetup for "addons-153066" (driver="kvm2")
	I1216 04:26:31.717600    9940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 04:26:31.717656    9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 04:26:31.720287    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.720673    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.720696    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.720857    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:31.803365    9940 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 04:26:31.808013    9940 info.go:137] Remote host: Buildroot 2025.02
	I1216 04:26:31.808039    9940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
	I1216 04:26:31.808116    9940 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
	I1216 04:26:31.808139    9940 start.go:296] duration metric: took 90.54538ms for postStartSetup
	I1216 04:26:31.821759    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.822167    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.822190    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.822446    9940 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/config.json ...
	I1216 04:26:31.828927    9940 start.go:128] duration metric: took 19.348094211s to createHost
	I1216 04:26:31.831328    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.831725    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.831753    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.831965    9940 main.go:143] libmachine: Using SSH client type: native
	I1216 04:26:31.832227    9940 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1216 04:26:31.832244    9940 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 04:26:31.937324    9940 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765859191.900097585
	
	I1216 04:26:31.937350    9940 fix.go:216] guest clock: 1765859191.900097585
	I1216 04:26:31.937360    9940 fix.go:229] Guest: 2025-12-16 04:26:31.900097585 +0000 UTC Remote: 2025-12-16 04:26:31.82894645 +0000 UTC m=+19.439554359 (delta=71.151135ms)
	I1216 04:26:31.937391    9940 fix.go:200] guest clock delta is within tolerance: 71.151135ms
	I1216 04:26:31.937396    9940 start.go:83] releasing machines lock for "addons-153066", held for 19.456649812s
	I1216 04:26:31.939797    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.940168    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.940188    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.940665    9940 ssh_runner.go:195] Run: cat /version.json
	I1216 04:26:31.940740    9940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 04:26:31.943751    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.944032    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.944187    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.944215    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.944349    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:31.944526    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:31.944561    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:31.944724    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:32.029876    9940 ssh_runner.go:195] Run: systemctl --version
	I1216 04:26:32.059197    9940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 04:26:32.712109    9940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 04:26:32.718819    9940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 04:26:32.718872    9940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 04:26:32.742804    9940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 04:26:32.742827    9940 start.go:496] detecting cgroup driver to use...
	I1216 04:26:32.742896    9940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 04:26:32.764024    9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 04:26:32.780817    9940 docker.go:218] disabling cri-docker service (if available) ...
	I1216 04:26:32.780871    9940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 04:26:32.797826    9940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 04:26:32.813247    9940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 04:26:32.957205    9940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 04:26:33.165397    9940 docker.go:234] disabling docker service ...
	I1216 04:26:33.165471    9940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 04:26:33.183025    9940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 04:26:33.198643    9940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 04:26:33.354740    9940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 04:26:33.498644    9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 04:26:33.514383    9940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 04:26:33.541164    9940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 04:26:33.541220    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.554012    9940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 04:26:33.554070    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.566596    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.578561    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.590294    9940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 04:26:33.602835    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.615271    9940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 04:26:33.634945    9940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
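All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. One way to confirm the resulting values (the full contents of that file are not echoed in this log, so the expected lines below are inferred from the commands, not quoted from it):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" inside the default_sysctls list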
	I1216 04:26:33.646443    9940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 04:26:33.656320    9940 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 04:26:33.656362    9940 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 04:26:33.676450    9940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 04:26:33.688552    9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:26:33.832431    9940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 04:26:33.939250    9940 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 04:26:33.939345    9940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 04:26:33.944839    9940 start.go:564] Will wait 60s for crictl version
	I1216 04:26:33.944920    9940 ssh_runner.go:195] Run: which crictl
	I1216 04:26:33.948980    9940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 04:26:33.985521    9940 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 04:26:33.985672    9940 ssh_runner.go:195] Run: crio --version
	I1216 04:26:34.014844    9940 ssh_runner.go:195] Run: crio --version
	I1216 04:26:34.044955    9940 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 04:26:34.048412    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:34.048743    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:34.048786    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:34.048969    9940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 04:26:34.053408    9940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:26:34.069056    9940 kubeadm.go:884] updating cluster {Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 04:26:34.069152    9940 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:26:34.069206    9940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:26:34.104023    9940 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 04:26:34.104084    9940 ssh_runner.go:195] Run: which lz4
	I1216 04:26:34.108372    9940 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 04:26:34.112962    9940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 04:26:34.112989    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 04:26:35.326872    9940 crio.go:462] duration metric: took 1.218523885s to copy over tarball
	I1216 04:26:35.326941    9940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 04:26:36.775076    9940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.44810628s)
	I1216 04:26:36.775102    9940 crio.go:469] duration metric: took 1.44820094s to extract the tarball
	I1216 04:26:36.775112    9940 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 04:26:36.810756    9940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 04:26:36.851866    9940 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 04:26:36.851886    9940 cache_images.go:86] Images are preloaded, skipping loading
	I1216 04:26:36.851894    9940 kubeadm.go:935] updating node { 192.168.39.189 8443 v1.34.2 crio true true} ...
	I1216 04:26:36.851987    9940 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-153066 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 04:26:36.852063    9940 ssh_runner.go:195] Run: crio config
	I1216 04:26:36.897204    9940 cni.go:84] Creating CNI manager for ""
	I1216 04:26:36.897229    9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:26:36.897246    9940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 04:26:36.897272    9940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-153066 NodeName:addons-153066 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 04:26:36.897418    9940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-153066"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 04:26:36.897490    9940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 04:26:36.909742    9940 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 04:26:36.909822    9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 04:26:36.921184    9940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 04:26:36.941873    9940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 04:26:36.962574    9940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
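The 2216-byte kubeadm.yaml.new just copied over is the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block dumped above. If you want to sanity-check such a file by hand, recent kubeadm releases ship a validator (hedged: confirm the subcommand exists in your kubeadm build before relying on it):
	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new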
	I1216 04:26:36.983619    9940 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1216 04:26:36.988196    9940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 04:26:37.002842    9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:26:37.143893    9940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:26:37.163929    9940 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066 for IP: 192.168.39.189
	I1216 04:26:37.163958    9940 certs.go:195] generating shared ca certs ...
	I1216 04:26:37.163980    9940 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.164172    9940 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
	I1216 04:26:37.325901    9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt ...
	I1216 04:26:37.325930    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt: {Name:mkb298cbd6f2a662a2ef54c0f206ce67489c4c74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.326098    9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key ...
	I1216 04:26:37.326109    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key: {Name:mk2ea7454f689a63b0191fe48cc639ae4d6c694d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.326184    9940 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
	I1216 04:26:37.356677    9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt ...
	I1216 04:26:37.356702    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt: {Name:mk9837582ee8f37268e0fda446ec14b506c621b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.356838    9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key ...
	I1216 04:26:37.356850    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key: {Name:mk5a6dbe24498aa7e3157b178a702ef9442795b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.356914    9940 certs.go:257] generating profile certs ...
	I1216 04:26:37.356981    9940 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key
	I1216 04:26:37.356997    9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt with IP's: []
	I1216 04:26:37.589063    9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt ...
	I1216 04:26:37.589091    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: {Name:mkd597a69d61b484fd3d6ce7897d18f14f48dc61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.589285    9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key ...
	I1216 04:26:37.589299    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.key: {Name:mke91f5e16b918a39d7606ef726c59a7541b4091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.589889    9940 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c
	I1216 04:26:37.589913    9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.189]
	I1216 04:26:37.636033    9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c ...
	I1216 04:26:37.636061    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c: {Name:mk10338343b5e41315c0439a0a3bc6d65d053dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.636242    9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c ...
	I1216 04:26:37.636257    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c: {Name:mk0aff05bc76f89ebebf42b652565025805a9bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.636372    9940 certs.go:382] copying /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt.e0ee3c2c -> /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt
	I1216 04:26:37.636449    9940 certs.go:386] copying /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key.e0ee3c2c -> /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key
	I1216 04:26:37.636497    9940 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key
	I1216 04:26:37.636514    9940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt with IP's: []
	I1216 04:26:37.654727    9940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt ...
	I1216 04:26:37.654747    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt: {Name:mka0ad637d45ce84d377e79efcf58bb26360f7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.654918    9940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key ...
	I1216 04:26:37.654934    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key: {Name:mkcd7cec00b5145ff289ee427ce1adf9fa8341c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:37.655138    9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 04:26:37.655173    9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
	I1216 04:26:37.655200    9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
	I1216 04:26:37.655224    9940 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
	I1216 04:26:37.655756    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 04:26:37.686969    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 04:26:37.716845    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 04:26:37.746408    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 04:26:37.775533    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 04:26:37.804706    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 04:26:37.834021    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 04:26:37.871887    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 04:26:37.905376    9940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 04:26:37.936298    9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 04:26:37.956342    9940 ssh_runner.go:195] Run: openssl version
	I1216 04:26:37.962738    9940 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:26:37.973506    9940 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 04:26:37.984247    9940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:26:37.989438    9940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:26:37.989480    9940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 04:26:37.996227    9940 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 04:26:38.006746    9940 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
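The symlink name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which the openssl x509 -hash -noout call a few lines up prints, and the hash-named link is what lets OpenSSL's lookup-by-hash find the CA under /etc/ssl/certs. The equivalent manual steps, using the paths from this log:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # resolves to b5213941.0 here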
	I1216 04:26:38.017730    9940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 04:26:38.022446    9940 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 04:26:38.022491    9940 kubeadm.go:401] StartCluster: {Name:addons-153066 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-153066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:26:38.022571    9940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 04:26:38.022633    9940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 04:26:38.055305    9940 cri.go:89] found id: ""
	I1216 04:26:38.055391    9940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 04:26:38.067112    9940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 04:26:38.078619    9940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 04:26:38.089225    9940 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 04:26:38.089241    9940 kubeadm.go:158] found existing configuration files:
	
	I1216 04:26:38.089283    9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 04:26:38.099396    9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 04:26:38.099464    9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 04:26:38.110843    9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 04:26:38.121147    9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 04:26:38.121182    9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 04:26:38.131451    9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 04:26:38.141465    9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 04:26:38.141531    9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 04:26:38.152415    9940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 04:26:38.162496    9940 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 04:26:38.162548    9940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 04:26:38.172971    9940 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 04:26:38.318478    9940 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 04:26:51.244458    9940 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 04:26:51.244534    9940 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 04:26:51.244637    9940 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 04:26:51.244788    9940 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 04:26:51.244903    9940 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 04:26:51.244955    9940 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 04:26:51.247059    9940 out.go:252]   - Generating certificates and keys ...
	I1216 04:26:51.247138    9940 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 04:26:51.247226    9940 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 04:26:51.247309    9940 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 04:26:51.247358    9940 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 04:26:51.247433    9940 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 04:26:51.247497    9940 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 04:26:51.247548    9940 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 04:26:51.247640    9940 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-153066 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I1216 04:26:51.247716    9940 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 04:26:51.247886    9940 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-153066 localhost] and IPs [192.168.39.189 127.0.0.1 ::1]
	I1216 04:26:51.247980    9940 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 04:26:51.248069    9940 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 04:26:51.248141    9940 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 04:26:51.248193    9940 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 04:26:51.248235    9940 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 04:26:51.248295    9940 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 04:26:51.248337    9940 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 04:26:51.248410    9940 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 04:26:51.248478    9940 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 04:26:51.248552    9940 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 04:26:51.248665    9940 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 04:26:51.250907    9940 out.go:252]   - Booting up control plane ...
	I1216 04:26:51.251011    9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 04:26:51.251119    9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 04:26:51.251217    9940 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 04:26:51.251381    9940 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 04:26:51.251504    9940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 04:26:51.251666    9940 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 04:26:51.251744    9940 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 04:26:51.251791    9940 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 04:26:51.251910    9940 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 04:26:51.252023    9940 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 04:26:51.252116    9940 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001332118s
	I1216 04:26:51.252253    9940 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 04:26:51.252356    9940 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.189:8443/livez
	I1216 04:26:51.252467    9940 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 04:26:51.252537    9940 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 04:26:51.252596    9940 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.035648545s
	I1216 04:26:51.252678    9940 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.732859848s
	I1216 04:26:51.252766    9940 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502235974s
	I1216 04:26:51.252902    9940 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 04:26:51.253048    9940 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 04:26:51.253137    9940 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 04:26:51.253347    9940 kubeadm.go:319] [mark-control-plane] Marking the node addons-153066 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 04:26:51.253417    9940 kubeadm.go:319] [bootstrap-token] Using token: s9emtg.znl4zc5yufahvhxg
	I1216 04:26:51.255003    9940 out.go:252]   - Configuring RBAC rules ...
	I1216 04:26:51.255100    9940 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 04:26:51.255186    9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 04:26:51.255345    9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 04:26:51.255505    9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 04:26:51.255628    9940 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 04:26:51.255725    9940 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 04:26:51.255862    9940 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 04:26:51.255928    9940 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 04:26:51.256003    9940 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 04:26:51.256011    9940 kubeadm.go:319] 
	I1216 04:26:51.256103    9940 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 04:26:51.256117    9940 kubeadm.go:319] 
	I1216 04:26:51.256212    9940 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 04:26:51.256223    9940 kubeadm.go:319] 
	I1216 04:26:51.256262    9940 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 04:26:51.256419    9940 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 04:26:51.256491    9940 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 04:26:51.256501    9940 kubeadm.go:319] 
	I1216 04:26:51.256572    9940 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 04:26:51.256581    9940 kubeadm.go:319] 
	I1216 04:26:51.256648    9940 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 04:26:51.256656    9940 kubeadm.go:319] 
	I1216 04:26:51.256729    9940 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 04:26:51.256853    9940 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 04:26:51.256945    9940 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 04:26:51.256956    9940 kubeadm.go:319] 
	I1216 04:26:51.257059    9940 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 04:26:51.257163    9940 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 04:26:51.257171    9940 kubeadm.go:319] 
	I1216 04:26:51.257294    9940 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s9emtg.znl4zc5yufahvhxg \
	I1216 04:26:51.257387    9940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6b10d8aa5d34951ef0c68d93c25038a5fa50fdf938787206894299e135264d81 \
	I1216 04:26:51.257409    9940 kubeadm.go:319] 	--control-plane 
	I1216 04:26:51.257415    9940 kubeadm.go:319] 
	I1216 04:26:51.257508    9940 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 04:26:51.257524    9940 kubeadm.go:319] 
	I1216 04:26:51.257630    9940 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s9emtg.znl4zc5yufahvhxg \
	I1216 04:26:51.257781    9940 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6b10d8aa5d34951ef0c68d93c25038a5fa50fdf938787206894299e135264d81 
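The --discovery-token-ca-cert-hash value in the join command above is a SHA-256 digest of the cluster CA's public key. Assuming the default RSA CA that kubeadm generates and the certificate directory /var/lib/minikube/certs used throughout this log, it can be re-derived with the standard kubeadm recipe:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 6b10d8aa5d34951ef0c68d93c25038a5fa50fdf938787206894299e135264d81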
	I1216 04:26:51.257797    9940 cni.go:84] Creating CNI manager for ""
	I1216 04:26:51.257804    9940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:26:51.259690    9940 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 04:26:51.260727    9940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 04:26:51.274427    9940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
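The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its contents are not echoed in this log (typically a chained conflist with a bridge plugin plus portmap, but treat that as an assumption), so inspect it on the guest if the details matter:
	sudo cat /etc/cni/net.d/1-k8s.conflist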
	I1216 04:26:51.296193    9940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 04:26:51.296294    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:51.296325    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-153066 minikube.k8s.io/updated_at=2025_12_16T04_26_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f minikube.k8s.io/name=addons-153066 minikube.k8s.io/primary=true
	I1216 04:26:51.452890    9940 ops.go:34] apiserver oom_adj: -16
	I1216 04:26:51.452901    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:51.953823    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:52.453292    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:52.952971    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:53.453237    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:53.953981    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:54.453616    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:54.954040    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:55.453480    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:55.953936    9940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 04:26:56.070327    9940 kubeadm.go:1114] duration metric: took 4.774098956s to wait for elevateKubeSystemPrivileges
	I1216 04:26:56.070375    9940 kubeadm.go:403] duration metric: took 18.047885467s to StartCluster
	I1216 04:26:56.070397    9940 settings.go:142] acquiring lock: {Name:mk934ce4e0f52c59044080dacae6bea8d1937fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:56.070571    9940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:26:56.071174    9940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:56.071409    9940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 04:26:56.071436    9940 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 04:26:56.071639    9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:56.071558    9940 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 04:26:56.071725    9940 addons.go:70] Setting default-storageclass=true in profile "addons-153066"
	I1216 04:26:56.071742    9940 addons.go:70] Setting gcp-auth=true in profile "addons-153066"
	I1216 04:26:56.071750    9940 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-153066"
	I1216 04:26:56.071747    9940 addons.go:70] Setting cloud-spanner=true in profile "addons-153066"
	I1216 04:26:56.071768    9940 addons.go:70] Setting ingress=true in profile "addons-153066"
	I1216 04:26:56.071798    9940 addons.go:239] Setting addon ingress=true in "addons-153066"
	I1216 04:26:56.071803    9940 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-153066"
	I1216 04:26:56.071761    9940 mustload.go:66] Loading cluster: addons-153066
	I1216 04:26:56.071824    9940 addons.go:70] Setting ingress-dns=true in profile "addons-153066"
	I1216 04:26:56.071829    9940 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-153066"
	I1216 04:26:56.071837    9940 addons.go:239] Setting addon ingress-dns=true in "addons-153066"
	I1216 04:26:56.071841    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071871    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071890    9940 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-153066"
	I1216 04:26:56.071905    9940 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-153066"
	I1216 04:26:56.071997    9940 config.go:182] Loaded profile config "addons-153066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:26:56.072020    9940 addons.go:70] Setting metrics-server=true in profile "addons-153066"
	I1216 04:26:56.072039    9940 addons.go:239] Setting addon metrics-server=true in "addons-153066"
	I1216 04:26:56.072067    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.072344    9940 addons.go:70] Setting inspektor-gadget=true in profile "addons-153066"
	I1216 04:26:56.072364    9940 addons.go:239] Setting addon inspektor-gadget=true in "addons-153066"
	I1216 04:26:56.072405    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.072805    9940 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-153066"
	I1216 04:26:56.072828    9940 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-153066"
	I1216 04:26:56.072851    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.072874    9940 addons.go:70] Setting registry-creds=true in profile "addons-153066"
	I1216 04:26:56.072893    9940 addons.go:239] Setting addon registry-creds=true in "addons-153066"
	I1216 04:26:56.072921    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071730    9940 addons.go:70] Setting yakd=true in profile "addons-153066"
	I1216 04:26:56.073103    9940 addons.go:239] Setting addon yakd=true in "addons-153066"
	I1216 04:26:56.073129    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.073156    9940 addons.go:70] Setting volcano=true in profile "addons-153066"
	I1216 04:26:56.073172    9940 addons.go:239] Setting addon volcano=true in "addons-153066"
	I1216 04:26:56.073194    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071811    9940 addons.go:239] Setting addon cloud-spanner=true in "addons-153066"
	I1216 04:26:56.073639    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071805    9940 addons.go:70] Setting registry=true in profile "addons-153066"
	I1216 04:26:56.073708    9940 addons.go:239] Setting addon registry=true in "addons-153066"
	I1216 04:26:56.073732    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.073871    9940 addons.go:70] Setting storage-provisioner=true in profile "addons-153066"
	I1216 04:26:56.073896    9940 addons.go:239] Setting addon storage-provisioner=true in "addons-153066"
	I1216 04:26:56.073922    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.071874    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.074125    9940 addons.go:70] Setting volumesnapshots=true in profile "addons-153066"
	I1216 04:26:56.074129    9940 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-153066"
	I1216 04:26:56.074142    9940 addons.go:239] Setting addon volumesnapshots=true in "addons-153066"
	I1216 04:26:56.074171    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.074206    9940 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-153066"
	I1216 04:26:56.074232    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.074550    9940 out.go:179] * Verifying Kubernetes components...
	I1216 04:26:56.076179    9940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 04:26:56.079131    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.080404    9940 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-153066"
	I1216 04:26:56.080436    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.080408    9940 addons.go:239] Setting addon default-storageclass=true in "addons-153066"
	I1216 04:26:56.080521    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:26:56.081518    9940 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 04:26:56.081518    9940 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 04:26:56.081523    9940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 04:26:56.082397    9940 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	W1216 04:26:56.082694    9940 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 04:26:56.083238    9940 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 04:26:56.083253    9940 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:56.083265    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 04:26:56.083255    9940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 04:26:56.083949    9940 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 04:26:56.084014    9940 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:56.084389    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 04:26:56.084789    9940 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 04:26:56.084829    9940 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 04:26:56.084803    9940 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 04:26:56.084806    9940 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 04:26:56.084792    9940 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 04:26:56.084873    9940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 04:26:56.085532    9940 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:56.086073    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 04:26:56.085516    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 04:26:56.085558    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 04:26:56.085530    9940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:56.086013    9940 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:56.086921    9940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 04:26:56.086242    9940 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:56.087375    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 04:26:56.087037    9940 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 04:26:56.087648    9940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 04:26:56.087044    9940 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:56.087766    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 04:26:56.087907    9940 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:56.087919    9940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:56.087924    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 04:26:56.087931    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 04:26:56.088668    9940 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 04:26:56.088675    9940 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 04:26:56.088675    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 04:26:56.088751    9940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 04:26:56.089646    9940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:26:56.090632    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.090728    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.090787    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 04:26:56.090836    9940 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 04:26:56.091245    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 04:26:56.091568    9940 out.go:179]   - Using image docker.io/busybox:stable
	I1216 04:26:56.091703    9940 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:56.091718    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 04:26:56.091973    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.092405    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.092438    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.092524    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.092583    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.093209    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.093209    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.093306    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.093365    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.094553    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.094802    9940 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:56.094817    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 04:26:56.095912    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 04:26:56.095949    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.097226    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.097263    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.098132    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.098337    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.098414    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 04:26:56.098439    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.099118    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.099452    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.099484    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.099594    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.099630    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.099501    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.099751    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.100143    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.100455    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.100622    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.100713    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.100792    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.100852    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.100964    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.100980    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.100995    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.101020    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.101283    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.101387    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 04:26:56.101497    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.101536    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.101902    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.101975    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.102003    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.102015    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.102037    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.102372    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.102408    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.102646    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.102909    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.102943    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.103063    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.103090    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.103125    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.103297    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.103396    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.103839    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.103861    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.104026    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:26:56.105060    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 04:26:56.106374    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 04:26:56.107612    9940 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 04:26:56.108716    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 04:26:56.108750    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 04:26:56.111714    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.112176    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:26:56.112210    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:26:56.112376    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	W1216 04:26:56.423168    9940 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:57314->192.168.39.189:22: read: connection reset by peer
	I1216 04:26:56.423209    9940 retry.go:31] will retry after 132.583334ms: ssh: handshake failed: read tcp 192.168.39.1:57314->192.168.39.189:22: read: connection reset by peer
	I1216 04:26:56.744464    9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 04:26:56.744490    9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 04:26:56.904137    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 04:26:56.907534    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 04:26:56.935654    9940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 04:26:56.936165    9940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 04:26:56.968921    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 04:26:56.991535    9940 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 04:26:56.991558    9940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 04:26:57.004009    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 04:26:57.012508    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 04:26:57.028306    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 04:26:57.034563    9940 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 04:26:57.034582    9940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 04:26:57.060467    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 04:26:57.077665    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 04:26:57.099842    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 04:26:57.099879    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 04:26:57.105430    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 04:26:57.144375    9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 04:26:57.144399    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 04:26:57.210328    9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 04:26:57.210362    9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 04:26:57.426856    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 04:26:57.616728    9940 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 04:26:57.616751    9940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 04:26:57.784605    9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 04:26:57.784640    9940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 04:26:57.788618    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 04:26:57.788641    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 04:26:57.806965    9940 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:57.806988    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 04:26:57.852050    9940 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 04:26:57.852076    9940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 04:26:58.078661    9940 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 04:26:58.078687    9940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 04:26:58.205646    9940 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:58.205678    9940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 04:26:58.215452    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 04:26:58.238971    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 04:26:58.239003    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 04:26:58.244015    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 04:26:58.244040    9940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 04:26:58.391330    9940 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:58.391349    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 04:26:58.443818    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 04:26:58.544295    9940 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:58.544319    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 04:26:58.581020    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 04:26:58.581046    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 04:26:58.765379    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 04:26:59.009290    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:26:59.069270    9940 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 04:26:59.069301    9940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 04:26:59.283310    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 04:26:59.283333    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 04:26:59.577950    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 04:26:59.577987    9940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 04:26:59.847299    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 04:26:59.847324    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 04:27:00.102895    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 04:27:00.102917    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 04:27:00.492114    9940 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:27:00.492138    9940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 04:27:00.895558    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 04:27:02.010407    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.106231897s)
	I1216 04:27:02.010450    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.102885366s)
	I1216 04:27:02.010503    9940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.074805991s)
	I1216 04:27:02.010554    9940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.074364287s)
	I1216 04:27:02.010582    9940 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1216 04:27:02.010623    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.041673975s)
	I1216 04:27:02.010694    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.006655948s)
	I1216 04:27:02.011405    9940 node_ready.go:35] waiting up to 6m0s for node "addons-153066" to be "Ready" ...
	I1216 04:27:02.112422    9940 node_ready.go:49] node "addons-153066" is "Ready"
	I1216 04:27:02.112457    9940 node_ready.go:38] duration metric: took 101.002439ms for node "addons-153066" to be "Ready" ...
	I1216 04:27:02.112473    9940 api_server.go:52] waiting for apiserver process to appear ...
	I1216 04:27:02.112525    9940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:27:02.517553    9940 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-153066" context rescaled to 1 replicas
	I1216 04:27:02.728185    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.715638367s)
	I1216 04:27:02.728240    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.699900737s)
	I1216 04:27:02.873813    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.813304765s)
	I1216 04:27:02.873881    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.796184218s)
	I1216 04:27:02.873961    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.768496288s)
	I1216 04:27:03.502735    9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 04:27:03.505452    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:27:03.505875    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:27:03.505899    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:27:03.506055    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:27:03.941716    9940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 04:27:04.070804    9940 addons.go:239] Setting addon gcp-auth=true in "addons-153066"
	I1216 04:27:04.070876    9940 host.go:66] Checking if "addons-153066" exists ...
	I1216 04:27:04.072802    9940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 04:27:04.075287    9940 main.go:143] libmachine: domain addons-153066 has defined MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:27:04.075723    9940 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c6:57:6e", ip: ""} in network mk-addons-153066: {Iface:virbr1 ExpiryTime:2025-12-16 05:26:27 +0000 UTC Type:0 Mac:52:54:00:c6:57:6e Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:addons-153066 Clientid:01:52:54:00:c6:57:6e}
	I1216 04:27:04.075745    9940 main.go:143] libmachine: domain addons-153066 has defined IP address 192.168.39.189 and MAC address 52:54:00:c6:57:6e in network mk-addons-153066
	I1216 04:27:04.075927    9940 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/addons-153066/id_rsa Username:docker}
	I1216 04:27:04.714121    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.287225247s)
	I1216 04:27:04.714157    9940 addons.go:495] Verifying addon ingress=true in "addons-153066"
	I1216 04:27:04.714181    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.498698095s)
	I1216 04:27:04.714209    9940 addons.go:495] Verifying addon registry=true in "addons-153066"
	I1216 04:27:04.714284    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.27042975s)
	I1216 04:27:04.714313    9940 addons.go:495] Verifying addon metrics-server=true in "addons-153066"
	I1216 04:27:04.714372    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.948966868s)
	I1216 04:27:04.715764    9940 out.go:179] * Verifying ingress addon...
	I1216 04:27:04.715794    9940 out.go:179] * Verifying registry addon...
	I1216 04:27:04.716431    9940 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-153066 service yakd-dashboard -n yakd-dashboard
	
	I1216 04:27:04.717680    9940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 04:27:04.717886    9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 04:27:04.816536    9940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 04:27:04.816563    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:04.816738    9940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 04:27:04.816760    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.236884    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.227548553s)
	W1216 04:27:05.236937    9940 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:27:05.236964    9940 retry.go:31] will retry after 318.718007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 04:27:05.246600    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:05.246744    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.555903    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 04:27:05.733789    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:05.733975    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.068386    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.172771145s)
	I1216 04:27:06.068441    9940 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-153066"
	I1216 04:27:06.068481    9940 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.995648502s)
	I1216 04:27:06.068410    9940 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.955863538s)
	I1216 04:27:06.068531    9940 api_server.go:72] duration metric: took 9.997048865s to wait for apiserver process to appear ...
	I1216 04:27:06.068548    9940 api_server.go:88] waiting for apiserver healthz status ...
	I1216 04:27:06.068578    9940 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I1216 04:27:06.070047    9940 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 04:27:06.070063    9940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 04:27:06.071958    9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 04:27:06.073151    9940 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 04:27:06.074111    9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 04:27:06.074127    9940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 04:27:06.090876    9940 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I1216 04:27:06.092811    9940 api_server.go:141] control plane version: v1.34.2
	I1216 04:27:06.092835    9940 api_server.go:131] duration metric: took 24.279496ms to wait for apiserver health ...
	I1216 04:27:06.092846    9940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 04:27:06.106737    9940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 04:27:06.106757    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.108167    9940 system_pods.go:59] 20 kube-system pods found
	I1216 04:27:06.108201    9940 system_pods.go:61] "amd-gpu-device-plugin-hhs5c" [7c605597-4044-4415-a423-ac0bc2d63d1f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:27:06.108209    9940 system_pods.go:61] "coredns-66bc5c9577-jbx8s" [0709930e-115a-4d78-b4bf-514176ebc1dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:27:06.108219    9940 system_pods.go:61] "coredns-66bc5c9577-k5hzj" [c86aac94-7319-4717-b09c-4c5ce48d083b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:27:06.108229    9940 system_pods.go:61] "csi-hostpath-attacher-0" [c470943f-0e67-4ab6-839b-9373ba7a9393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:27:06.108235    9940 system_pods.go:61] "csi-hostpath-resizer-0" [bff855f5-caa5-4b55-a322-a8296584227b] Pending
	I1216 04:27:06.108241    9940 system_pods.go:61] "csi-hostpathplugin-82zcc" [e35aa6fc-ceba-4edc-8c51-bbee1dd678e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:27:06.108247    9940 system_pods.go:61] "etcd-addons-153066" [58bb4ccb-f65f-48ee-bdd8-11f5d4ab35d6] Running
	I1216 04:27:06.108251    9940 system_pods.go:61] "kube-apiserver-addons-153066" [65425427-5f1b-456d-917f-421ffab25e59] Running
	I1216 04:27:06.108255    9940 system_pods.go:61] "kube-controller-manager-addons-153066" [c827dadd-9054-4d66-a51f-ca33293eeed4] Running
	I1216 04:27:06.108266    9940 system_pods.go:61] "kube-ingress-dns-minikube" [becea7ef-45d0-4bec-8470-fe1f574391a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:27:06.108271    9940 system_pods.go:61] "kube-proxy-h5nhv" [98c8054c-fb42-44bb-96c3-b9e2b534f591] Running
	I1216 04:27:06.108274    9940 system_pods.go:61] "kube-scheduler-addons-153066" [553dd44f-dd6d-44f2-b24e-fd2ac993b9d6] Running
	I1216 04:27:06.108278    9940 system_pods.go:61] "metrics-server-85b7d694d7-qm9rk" [0ea4c9ef-e70d-4d40-8e23-271dbeeb59b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:27:06.108284    9940 system_pods.go:61] "nvidia-device-plugin-daemonset-z4dn4" [3c096eaf-758d-432e-81f4-c8dfdd7b23cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:27:06.108291    9940 system_pods.go:61] "registry-6b586f9694-bxf9q" [afd4c327-e7bf-4429-ad65-493431f56200] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:27:06.108296    9940 system_pods.go:61] "registry-creds-764b6fb674-q9m7r" [5c769b6a-2a35-4bc3-8118-0ecb8c704bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:27:06.108303    9940 system_pods.go:61] "registry-proxy-pbkbs" [0bced886-f1b7-415e-91d0-5f533bcfe8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:27:06.108309    9940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d5zsw" [66d92118-0ca5-449c-8de3-9d6e936d4145] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:27:06.108316    9940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fmtr4" [b2b180dc-7184-400d-acd5-364d26ca2e15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:27:06.108320    9940 system_pods.go:61] "storage-provisioner" [261052b5-937f-4f46-8238-ab5a0913c588] Running
	I1216 04:27:06.108328    9940 system_pods.go:74] duration metric: took 15.476535ms to wait for pod list to return data ...
	I1216 04:27:06.108338    9940 default_sa.go:34] waiting for default service account to be created ...
	I1216 04:27:06.116992    9940 default_sa.go:45] found service account: "default"
	I1216 04:27:06.117014    9940 default_sa.go:55] duration metric: took 8.667417ms for default service account to be created ...
	I1216 04:27:06.117026    9940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 04:27:06.147263    9940 system_pods.go:86] 20 kube-system pods found
	I1216 04:27:06.147306    9940 system_pods.go:89] "amd-gpu-device-plugin-hhs5c" [7c605597-4044-4415-a423-ac0bc2d63d1f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 04:27:06.147320    9940 system_pods.go:89] "coredns-66bc5c9577-jbx8s" [0709930e-115a-4d78-b4bf-514176ebc1dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:27:06.147334    9940 system_pods.go:89] "coredns-66bc5c9577-k5hzj" [c86aac94-7319-4717-b09c-4c5ce48d083b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 04:27:06.147350    9940 system_pods.go:89] "csi-hostpath-attacher-0" [c470943f-0e67-4ab6-839b-9373ba7a9393] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 04:27:06.147358    9940 system_pods.go:89] "csi-hostpath-resizer-0" [bff855f5-caa5-4b55-a322-a8296584227b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 04:27:06.147938    9940 system_pods.go:89] "csi-hostpathplugin-82zcc" [e35aa6fc-ceba-4edc-8c51-bbee1dd678e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 04:27:06.147962    9940 system_pods.go:89] "etcd-addons-153066" [58bb4ccb-f65f-48ee-bdd8-11f5d4ab35d6] Running
	I1216 04:27:06.147970    9940 system_pods.go:89] "kube-apiserver-addons-153066" [65425427-5f1b-456d-917f-421ffab25e59] Running
	I1216 04:27:06.147975    9940 system_pods.go:89] "kube-controller-manager-addons-153066" [c827dadd-9054-4d66-a51f-ca33293eeed4] Running
	I1216 04:27:06.147986    9940 system_pods.go:89] "kube-ingress-dns-minikube" [becea7ef-45d0-4bec-8470-fe1f574391a6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 04:27:06.147991    9940 system_pods.go:89] "kube-proxy-h5nhv" [98c8054c-fb42-44bb-96c3-b9e2b534f591] Running
	I1216 04:27:06.148000    9940 system_pods.go:89] "kube-scheduler-addons-153066" [553dd44f-dd6d-44f2-b24e-fd2ac993b9d6] Running
	I1216 04:27:06.148008    9940 system_pods.go:89] "metrics-server-85b7d694d7-qm9rk" [0ea4c9ef-e70d-4d40-8e23-271dbeeb59b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 04:27:06.148019    9940 system_pods.go:89] "nvidia-device-plugin-daemonset-z4dn4" [3c096eaf-758d-432e-81f4-c8dfdd7b23cb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 04:27:06.148028    9940 system_pods.go:89] "registry-6b586f9694-bxf9q" [afd4c327-e7bf-4429-ad65-493431f56200] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 04:27:06.148037    9940 system_pods.go:89] "registry-creds-764b6fb674-q9m7r" [5c769b6a-2a35-4bc3-8118-0ecb8c704bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 04:27:06.148044    9940 system_pods.go:89] "registry-proxy-pbkbs" [0bced886-f1b7-415e-91d0-5f533bcfe8c0] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 04:27:06.148057    9940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d5zsw" [66d92118-0ca5-449c-8de3-9d6e936d4145] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:27:06.148066    9940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fmtr4" [b2b180dc-7184-400d-acd5-364d26ca2e15] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 04:27:06.148073    9940 system_pods.go:89] "storage-provisioner" [261052b5-937f-4f46-8238-ab5a0913c588] Running
	I1216 04:27:06.148083    9940 system_pods.go:126] duration metric: took 31.049946ms to wait for k8s-apps to be running ...
	I1216 04:27:06.148097    9940 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 04:27:06.148157    9940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:27:06.227563    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.229510    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:06.233951    9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 04:27:06.233973    9940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 04:27:06.372451    9940 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:27:06.372483    9940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 04:27:06.463549    9940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 04:27:06.579097    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:06.729528    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:06.730640    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.079988    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.224305    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.224619    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.586047    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:07.726254    9940 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.57806945s)
	I1216 04:27:07.726288    9940 system_svc.go:56] duration metric: took 1.578188264s WaitForService to wait for kubelet
	I1216 04:27:07.726299    9940 kubeadm.go:587] duration metric: took 11.654816881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 04:27:07.726320    9940 node_conditions.go:102] verifying NodePressure condition ...
	I1216 04:27:07.726257    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.170313303s)
	I1216 04:27:07.735589    9940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 04:27:07.735628    9940 node_conditions.go:123] node cpu capacity is 2
	I1216 04:27:07.735646    9940 node_conditions.go:105] duration metric: took 9.319252ms to run NodePressure ...
	I1216 04:27:07.735660    9940 start.go:242] waiting for startup goroutines ...
	I1216 04:27:07.737610    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:07.738700    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:07.842357    9940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.378763722s)
	I1216 04:27:07.843404    9940 addons.go:495] Verifying addon gcp-auth=true in "addons-153066"
	I1216 04:27:07.844974    9940 out.go:179] * Verifying gcp-auth addon...
	I1216 04:27:07.847187    9940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 04:27:07.860248    9940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 04:27:07.860263    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.081522    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.229931    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.229975    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.354239    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:08.585379    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:08.725689    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:08.726166    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:08.851479    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.081006    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.223323    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.224480    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.351925    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:09.580385    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:09.725546    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:09.727097    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:09.854767    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.077526    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.221816    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.221911    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.352037    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:10.576414    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:10.722668    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:10.723007    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:10.877245    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.079244    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.225203    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.228072    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.350727    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:11.577161    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:11.728047    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:11.728614    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:11.851574    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.076097    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.220907    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.222324    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.350214    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:12.575932    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:12.723074    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:12.723240    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:12.850851    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.075863    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.220817    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.221612    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.350670    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:13.576941    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:13.725823    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:13.725993    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:13.853435    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.076182    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.222838    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.225256    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.352084    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:14.576599    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:14.722332    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:14.723211    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:14.850564    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:15.076790    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.220742    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:15.220985    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.351581    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:15.577401    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:15.722521    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:15.723609    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:15.853013    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.077400    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.221762    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:16.221994    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.352150    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:16.575739    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:16.721286    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:16.721893    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:16.852680    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.078497    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.225595    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:17.225759    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:17.351626    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:17.576240    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:17.724349    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:17.726740    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:17.852188    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.077969    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.221321    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:18.224877    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:18.351719    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:18.576673    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:18.724493    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:18.725497    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:18.850126    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.082174    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.223797    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:19.225352    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:19.353700    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:19.576626    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:19.722718    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:19.723219    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:19.886622    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.077427    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.224462    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:20.226403    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:20.353518    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:20.578299    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:20.723960    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:20.724361    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:20.855977    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.075324    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.222389    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:21.224041    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:21.706026    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:21.706056    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:21.723412    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:21.723989    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:21.853250    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.077245    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.222950    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:22.224807    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:22.351170    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:22.578517    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:22.723743    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:22.726794    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:22.855294    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.077111    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.224949    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:23.226755    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:23.354055    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:23.577894    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:23.829228    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:23.829481    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:23.850186    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.085789    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.221843    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:24.223242    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:24.350845    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:24.576144    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:24.722460    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:24.722876    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:24.855717    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:25.078789    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:25.221847    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:25.223197    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:25.350288    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:25.575869    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:25.720994    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:25.721191    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:25.849886    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:26.075686    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:26.221033    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:26.221248    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:26.350254    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:26.576553    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:26.723542    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:26.724618    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:26.854821    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:27.075265    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:27.223181    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:27.223861    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:27.357203    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:27.576066    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:27.725031    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:27.727482    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:27.855550    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:28.077528    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:28.221782    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:28.222025    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:28.352446    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:28.578028    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:28.722358    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:28.722827    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:28.851145    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:29.076321    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:29.224578    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:29.225071    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:29.350911    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:29.575438    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:29.721700    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:29.722746    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:29.850882    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:30.075214    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:30.222969    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:30.223232    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:30.350134    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:30.575638    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:30.721474    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:30.722054    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:30.851134    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:31.076490    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:31.221564    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:31.221915    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:31.350891    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:31.576446    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:31.726903    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:31.729195    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:31.853394    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:32.078515    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:32.225337    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:32.229676    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:32.352306    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:32.578592    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:32.721947    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:32.722475    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:32.850940    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:33.079376    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:33.221359    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:33.221871    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:33.352444    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:33.576369    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:33.721322    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:33.722062    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:33.850623    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:34.076802    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:34.225793    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:34.226352    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:34.351416    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:34.590463    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:34.925817    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:34.925948    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:34.926028    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:35.077072    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:35.224215    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:35.225571    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:35.351932    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:35.576011    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:35.721458    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:35.721715    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:35.850967    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:36.075912    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:36.221210    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:36.221576    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:36.351236    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:36.576512    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:36.722383    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:36.722450    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:36.850551    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:37.075848    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:37.221365    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:37.221526    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:37.350638    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:37.575806    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:37.721253    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:37.721380    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:37.851063    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:38.075583    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:38.224329    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:38.225831    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:38.354787    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:38.578394    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:38.725450    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:38.725640    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:38.852229    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:39.075758    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:39.220907    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:39.221892    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:39.355005    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:39.578472    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:39.722363    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 04:27:39.722836    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:39.851569    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:40.076509    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:40.222384    9940 kapi.go:107] duration metric: took 35.504494079s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 04:27:40.222624    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:40.350940    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:40.575464    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:40.721894    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:40.850684    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:41.077410    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:41.227906    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:41.351684    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:41.576530    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:41.721310    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:41.850570    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:42.079845    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:42.222444    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:42.352569    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:42.578383    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:42.722408    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:42.850414    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:43.077377    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:43.221359    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:43.351831    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:43.576276    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:43.721690    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:43.851404    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:44.082117    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:44.220732    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:44.351974    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:44.576364    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:44.874716    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:44.877662    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:45.077589    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:45.222331    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:45.350369    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:45.576421    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:45.722330    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:45.851234    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:46.079881    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:46.222360    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:46.353880    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:46.579475    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:46.723915    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:46.853013    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:47.076039    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:47.225939    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:47.354615    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:47.577391    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:47.721609    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:47.852242    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:48.076635    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:48.221572    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:48.350535    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:48.578813    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:48.721708    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:48.850097    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:49.076543    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:49.222157    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:49.351049    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:49.578746    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:49.721783    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:50.311943    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:50.312001    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:50.312227    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:50.353896    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:50.575133    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:50.723766    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:50.851339    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:51.077966    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:51.220842    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:51.350808    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:51.576748    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:51.720575    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:51.850538    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:52.077089    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:52.221660    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:52.354643    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:52.575016    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:52.723673    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:52.850367    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:53.081895    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:53.225204    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:53.353603    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:53.577462    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:53.722119    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:53.851971    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:54.078279    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:54.222084    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:54.352499    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:54.577547    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:54.722655    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:54.851060    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:55.077896    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:55.227930    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:55.464752    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:55.579788    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:55.722068    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:55.851870    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:56.076238    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:56.226962    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:56.356487    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:56.580230    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:56.723727    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:56.850521    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:57.091411    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:57.225587    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:57.351524    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:57.577676    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:57.722910    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:57.851158    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:58.078039    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:58.224177    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:58.352508    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:58.579280    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:58.722928    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:58.855117    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:59.080798    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:59.222647    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:59.350630    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:27:59.581464    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:27:59.727360    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:27:59.850274    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:00.076395    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:00.222833    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:00.352571    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:00.577415    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:00.721964    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:00.851188    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:01.075290    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:01.223344    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:01.353310    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:01.577232    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:01.722097    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:01.850964    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:02.079828    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:02.226810    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:02.350596    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:02.576410    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:02.722661    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:02.853332    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:03.077157    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:03.223329    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:03.353115    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:03.668906    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:03.723801    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:03.851609    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:04.079433    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:04.222034    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:04.350990    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:04.576845    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:04.739438    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:04.852864    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:05.074875    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:05.226667    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:05.352222    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:05.576994    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:05.723186    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:05.851113    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:06.079813    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:06.221577    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:06.354571    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:06.584103    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:06.721924    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:06.852050    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:07.082112    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:07.221812    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:07.351368    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:07.582263    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:07.723402    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:07.852185    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:08.075812    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:08.221720    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:08.350751    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:08.580432    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:08.724146    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:08.855595    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:09.078720    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:09.226580    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:09.355612    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:09.576522    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:09.723332    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:10.038449    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:10.078920    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:10.225141    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:10.352350    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:10.577100    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:10.723766    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:10.853646    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:11.077374    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:11.221529    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:11.355944    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:11.579651    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:11.731522    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:11.853323    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:12.077802    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:12.226870    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:12.355234    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:12.576652    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:12.730421    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:12.851945    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:13.081415    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:13.226658    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:13.353243    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:13.579099    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:13.723065    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:13.852418    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:14.077265    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:14.224651    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:14.353055    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:14.576549    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:14.722972    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:14.851547    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:15.077576    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:15.222673    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:15.353491    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:15.578329    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:15.722425    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:15.850218    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:16.078573    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:16.222157    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:16.354677    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:16.575665    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:16.722972    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:16.852377    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:17.078904    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:17.222215    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:17.353737    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:17.575806    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:17.723258    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:17.850679    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:18.079604    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:18.228599    9940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 04:28:18.352169    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:18.580171    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:18.732208    9940 kapi.go:107] duration metric: took 1m14.014526101s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 04:28:18.851292    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:19.078359    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:19.352220    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:19.575997    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:19.886282    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:20.076413    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:20.350519    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:20.576792    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 04:28:20.851888    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:21.077715    9940 kapi.go:107] duration metric: took 1m15.005755537s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 04:28:21.352626    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:21.851864    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:22.350254    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:22.851924    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:23.352766    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:23.851889    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:24.350527    9940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 04:28:24.852310    9940 kapi.go:107] duration metric: took 1m17.005122808s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 04:28:24.853865    9940 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-153066 cluster.
	I1216 04:28:24.854911    9940 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 04:28:24.855909    9940 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 04:28:24.857104    9940 out.go:179] * Enabled addons: ingress-dns, amd-gpu-device-plugin, registry-creds, default-storageclass, inspektor-gadget, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1216 04:28:24.858189    9940 addons.go:530] duration metric: took 1m28.786633957s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin registry-creds default-storageclass inspektor-gadget nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1216 04:28:24.858230    9940 start.go:247] waiting for cluster config update ...
	I1216 04:28:24.858252    9940 start.go:256] writing updated cluster config ...
	I1216 04:28:24.858511    9940 ssh_runner.go:195] Run: rm -f paused
	I1216 04:28:24.865606    9940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:28:24.868591    9940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k5hzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.873106    9940 pod_ready.go:94] pod "coredns-66bc5c9577-k5hzj" is "Ready"
	I1216 04:28:24.873122    9940 pod_ready.go:86] duration metric: took 4.512807ms for pod "coredns-66bc5c9577-k5hzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.875497    9940 pod_ready.go:83] waiting for pod "etcd-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.880402    9940 pod_ready.go:94] pod "etcd-addons-153066" is "Ready"
	I1216 04:28:24.880418    9940 pod_ready.go:86] duration metric: took 4.903012ms for pod "etcd-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.883591    9940 pod_ready.go:83] waiting for pod "kube-apiserver-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.888868    9940 pod_ready.go:94] pod "kube-apiserver-addons-153066" is "Ready"
	I1216 04:28:24.888898    9940 pod_ready.go:86] duration metric: took 5.283388ms for pod "kube-apiserver-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:24.891390    9940 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:25.270569    9940 pod_ready.go:94] pod "kube-controller-manager-addons-153066" is "Ready"
	I1216 04:28:25.270595    9940 pod_ready.go:86] duration metric: took 379.184872ms for pod "kube-controller-manager-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:25.471487    9940 pod_ready.go:83] waiting for pod "kube-proxy-h5nhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:25.870658    9940 pod_ready.go:94] pod "kube-proxy-h5nhv" is "Ready"
	I1216 04:28:25.870685    9940 pod_ready.go:86] duration metric: took 399.174437ms for pod "kube-proxy-h5nhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:26.070258    9940 pod_ready.go:83] waiting for pod "kube-scheduler-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:26.470288    9940 pod_ready.go:94] pod "kube-scheduler-addons-153066" is "Ready"
	I1216 04:28:26.470331    9940 pod_ready.go:86] duration metric: took 400.047581ms for pod "kube-scheduler-addons-153066" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 04:28:26.470343    9940 pod_ready.go:40] duration metric: took 1.60471408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 04:28:26.515117    9940 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 04:28:26.517092    9940 out.go:179] * Done! kubectl is now configured to use "addons-153066" cluster and "default" namespace by default
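The kapi.go:96 / kapi.go:107 lines above are minikube's addon-readiness poll: each label selector is re-checked on a short interval until the matching pods report Ready, and the elapsed time is then logged as a "duration metric". The Go sketch below is not minikube's actual implementation; it only illustrates the general shape of such a label-selector wait loop. It assumes client-go with a reachable kubeconfig, and every name in it (waitForPodsBySelector, podReady, the selector and timeout values) is illustrative rather than taken from the source.

    // Minimal sketch of a label-selector readiness poll, assuming client-go.
    // Hypothetical code: names and parameters are illustrative, not minikube's.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsBySelector polls pods matching the label selector until every
    // matching pod reports Ready, or the timeout expires, then prints how long
    // the wait took (mirroring the "duration metric" log lines above).
    func waitForPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                if len(pods.Items) == 0 {
                    fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
                    return false, nil
                }
                for _, p := range pods.Items {
                    if !podReady(&p) {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err != nil {
            return err
        }
        fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
        return nil
    }

    // podReady reports whether the pod has its Ready condition set to True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Example selector and timeout chosen to resemble the ingress-nginx wait above.
        if err := waitForPodsBySelector(context.Background(), cs, "ingress-nginx",
            "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
            panic(err)
        }
    }

In this sketch the per-iteration "waiting for pod" output and the final duration line correspond to the kapi.go:96 and kapi.go:107 messages respectively; the real code additionally tracks several selectors concurrently, which is why the csi-hostpath-driver, ingress-nginx, and gcp-auth waits interleave in the log.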
	
	
	==> CRI-O <==
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002019271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: a2654019-ae9a-44ed-ba5e-6eea0488c198,},},}" file="otel-collector/interceptors.go:62" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002086578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.002129046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7464afb-69e7-48ba-872c-728d271fead4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.017824820Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80b687bf-24cd-47ac-9b5f-165cc6389cd1 name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.017928178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80b687bf-24cd-47ac-9b5f-165cc6389cd1 name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.019581703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19097fdc-cc35-4711-b45e-e441f3ed4864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.020803028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490020779842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19097fdc-cc35-4711-b45e-e441f3ed4864 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.021768202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.021825592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.022122016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9c809cddb7112d1c7f70c743a875a56ab59b90619641c72d688e3ba2d24ac3a,PodSandboxId:17eb35aa9b53a0cbfe496edc48b52aae8c351cd263c852599db6b66850208570,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765859348228109676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b42be6a9-0973-4607-a39f-f43345bc18fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1899bf0f3725a259538db15d9fcc9b1551687d0ddb914b38868eeb0ea596e2,PodSandboxId:3057e156546032d0b91e4b2a3110f83f38627941d3cb682f610b07a868e47f75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765859311004531465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e064416f-1c71-491d-b296-b0861bd3abce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f2c27cd73ef41acf51f70ed2f2a53463e9cadb906a521bc2a9679053975ca9,PodSandboxId:2d7797ab913e2012f29dc08e8701662f5934ae23c91b89a2a821b51e92857193,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765859298209918222,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w5fvb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a81b0ad-2518-45a8-912f-6dc296e4f3fd,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:651b3c32c1f8afca500fbe414cf50f890236f2a90dd3c0369135285365c30c42,PodSandboxId:37271fa0c448ebcf5e86caef7f50024b856ec2fd142e3b06d605e4d67580da73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859277123476412,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjxvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59f3ba53-c633-46ba-85da-f30ba2227661,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:485e0f16c003025a84dc9dbceddbcbbfcf5f54e6d123ae985a2eb702d3d7bd60,PodSandboxId:53be126e98c2d62d3d28b7c196d30c305aef01ee1fec0fd42c3c7519f5b31c39,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859276991069254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7tk55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2f1a313-0984-4131-9870-722d9503ac19,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4d570fbd10c2bc44ada6c782c62d1a117dab9964193005f6b32acdc6b37aea,PodSandboxId:0aa876268e46b217471958e00ced8d346769164ee6c78a961b003a696bf54604,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765859244093593480,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becea7ef-45d0-4bec-8470-fe1f574391a6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53d024caa8b10e97f1f0e7d346d0bb5299e8cc00164b4b3771a24398d8fc43d,PodSandboxId:af30bd1e8cf2e112556350a54dff781fc27dbedadaeb2a3a9ceecc810e6f2e36,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765859234500420408,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hhs5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c605597-4044-4415-a423-ac0bc2d63d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4,PodSandboxId:7e7dc1959db100062964b803f1ebf21880904343769d85239600546e8fb1547b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765859225045969565,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261052b5-937f-4f46-8238-ab5a0913c588,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1,PodSandboxId:82443fd12107522a7421d02a683c0eb20e501025efb36d6e8b1c5aa2af8053b3,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765859217549240089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k5hzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c86aac94-7319-4717-b09c-4c5ce48d083b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a,PodSandboxId:39dfd92507902a4f0ae2045a138312531afe92d81dcdb55807b374192ee791e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765859216974023700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5nhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c8054c-fb42-44bb-96c3-b9e2b534f591,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5,PodSandboxId:cea2cbe33776e1a18bc2558f40b02902704d8f9e691c99a63bccb5a845297286,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765859205307896383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dcda41444dbf89830a69aa2ef3ed2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05,PodSandboxId:7ed0f7d74790722c5225f0ee9a4c49794cd09eb16ec34ae331c0b6346d001613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765859205315782468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d396e6c09d971f3da5ab405f520ebf96,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26,PodSandboxId:2b687031565d0d353a82b315bdc7a9a49a11ae2184f22f5fd9c6dc453c8a900f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765859205296822726,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328dfa107a11db8f9546f472798d351e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f,PodSandboxId:3c0d13efc0a7352880e11f349d731bb791fb853e0e24804081ea4d1dec39ce15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765859205264336254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 481d5acbe07932cec43964b40b18e484,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e707ed32-af99-44e9-a407-de2a9b62cdb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.045253807Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.056743749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df4aed0a-2629-469e-ae36-1797ef43c9e8 name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.056818713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df4aed0a-2629-469e-ae36-1797ef43c9e8 name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.058634055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97c92f94-a545-4521-8c08-cb27a8491a36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.060152308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490060123354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97c92f94-a545-4521-8c08-cb27a8491a36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.061083136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9b1f941-370a-478e-a6af-dd2c79da1ea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.061152434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9b1f941-370a-478e-a6af-dd2c79da1ea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.061708699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9c809cddb7112d1c7f70c743a875a56ab59b90619641c72d688e3ba2d24ac3a,PodSandboxId:17eb35aa9b53a0cbfe496edc48b52aae8c351cd263c852599db6b66850208570,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765859348228109676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b42be6a9-0973-4607-a39f-f43345bc18fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1899bf0f3725a259538db15d9fcc9b1551687d0ddb914b38868eeb0ea596e2,PodSandboxId:3057e156546032d0b91e4b2a3110f83f38627941d3cb682f610b07a868e47f75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765859311004531465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e064416f-1c71-491d-b296-b0861bd3abce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f2c27cd73ef41acf51f70ed2f2a53463e9cadb906a521bc2a9679053975ca9,PodSandboxId:2d7797ab913e2012f29dc08e8701662f5934ae23c91b89a2a821b51e92857193,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765859298209918222,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w5fvb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a81b0ad-2518-45a8-912f-6dc296e4f3fd,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:651b3c32c1f8afca500fbe414cf50f890236f2a90dd3c0369135285365c30c42,PodSandboxId:37271fa0c448ebcf5e86caef7f50024b856ec2fd142e3b06d605e4d67580da73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859277123476412,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjxvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59f3ba53-c633-46ba-85da-f30ba2227661,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:485e0f16c003025a84dc9dbceddbcbbfcf5f54e6d123ae985a2eb702d3d7bd60,PodSandboxId:53be126e98c2d62d3d28b7c196d30c305aef01ee1fec0fd42c3c7519f5b31c39,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859276991069254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7tk55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2f1a313-0984-4131-9870-722d9503ac19,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4d570fbd10c2bc44ada6c782c62d1a117dab9964193005f6b32acdc6b37aea,PodSandboxId:0aa876268e46b217471958e00ced8d346769164ee6c78a961b003a696bf54604,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765859244093593480,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becea7ef-45d0-4bec-8470-fe1f574391a6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53d024caa8b10e97f1f0e7d346d0bb5299e8cc00164b4b3771a24398d8fc43d,PodSandboxId:af30bd1e8cf2e112556350a54dff781fc27dbedadaeb2a3a9ceecc810e6f2e36,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765859234500420408,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hhs5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c605597-4044-4415-a423-ac0bc2d63d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4,PodSandboxId:7e7dc1959db100062964b803f1ebf21880904343769d85239600546e8fb1547b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765859225045969565,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261052b5-937f-4f46-8238-ab5a0913c588,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1,PodSandboxId:82443fd12107522a7421d02a683c0eb20e501025efb36d6e8b1c5aa2af8053b3,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765859217549240089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k5hzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c86aac94-7319-4717-b09c-4c5ce48d083b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a,PodSandboxId:39dfd92507902a4f0ae2045a138312531afe92d81dcdb55807b374192ee791e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765859216974023700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5nhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c8054c-fb42-44bb-96c3-b9e2b534f591,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5,PodSandboxId:cea2cbe33776e1a18bc2558f40b02902704d8f9e691c99a63bccb5a845297286,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765859205307896383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dcda41444dbf89830a69aa2ef3ed2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05,PodSandboxId:7ed0f7d74790722c5225f0ee9a4c49794cd09eb16ec34ae331c0b6346d001613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765859205315782468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d396e6c09d971f3da5ab405f520ebf96,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26,PodSandboxId:2b687031565d0d353a82b315bdc7a9a49a11ae2184f22f5fd9c6dc453c8a900f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765859205296822726,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328dfa107a11db8f9546f472798d351e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f,PodSandboxId:3c0d13efc0a7352880e11f349d731bb791fb853e0e24804081ea4d1dec39ce15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765859205264336254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 481d5acbe07932cec43964b40b18e484,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9b1f941-370a-478e-a6af-dd2c79da1ea4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.094938523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59f14ae1-7f05-4fd6-8ed3-9a70923fc17e name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.095018739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59f14ae1-7f05-4fd6-8ed3-9a70923fc17e name=/runtime.v1.RuntimeService/Version
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.096593062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50cb59e6-01a8-42a4-9ac2-1dcd7eb80f7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.098390966Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765859490098359880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50cb59e6-01a8-42a4-9ac2-1dcd7eb80f7a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.099671400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.099926988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 04:31:30 addons-153066 crio[816]: time="2025-12-16 04:31:30.100506298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b9c809cddb7112d1c7f70c743a875a56ab59b90619641c72d688e3ba2d24ac3a,PodSandboxId:17eb35aa9b53a0cbfe496edc48b52aae8c351cd263c852599db6b66850208570,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765859348228109676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b42be6a9-0973-4607-a39f-f43345bc18fe,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da1899bf0f3725a259538db15d9fcc9b1551687d0ddb914b38868eeb0ea596e2,PodSandboxId:3057e156546032d0b91e4b2a3110f83f38627941d3cb682f610b07a868e47f75,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765859311004531465,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e064416f-1c71-491d-b296-b0861bd3abce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6f2c27cd73ef41acf51f70ed2f2a53463e9cadb906a521bc2a9679053975ca9,PodSandboxId:2d7797ab913e2012f29dc08e8701662f5934ae23c91b89a2a821b51e92857193,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765859298209918222,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-w5fvb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7a81b0ad-2518-45a8-912f-6dc296e4f3fd,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:651b3c32c1f8afca500fbe414cf50f890236f2a90dd3c0369135285365c30c42,PodSandboxId:37271fa0c448ebcf5e86caef7f50024b856ec2fd142e3b06d605e4d67580da73,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859277123476412,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-cjxvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 59f3ba53-c633-46ba-85da-f30ba2227661,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:485e0f16c003025a84dc9dbceddbcbbfcf5f54e6d123ae985a2eb702d3d7bd60,PodSandboxId:53be126e98c2d62d3d28b7c196d30c305aef01ee1fec0fd42c3c7519f5b31c39,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765859276991069254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7tk55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a2f1a313-0984-4131-9870-722d9503ac19,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a4d570fbd10c2bc44ada6c782c62d1a117dab9964193005f6b32acdc6b37aea,PodSandboxId:0aa876268e46b217471958e00ced8d346769164ee6c78a961b003a696bf54604,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765859244093593480,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: becea7ef-45d0-4bec-8470-fe1f574391a6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c53d024caa8b10e97f1f0e7d346d0bb5299e8cc00164b4b3771a24398d8fc43d,PodSandboxId:af30bd1e8cf2e112556350a54dff781fc27dbedadaeb2a3a9ceecc810e6f2e36,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765859234500420408,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hhs5c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c605597-4044-4415-a423-ac0bc2d63d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4,PodSandboxId:7e7dc1959db100062964b803f1ebf21880904343769d85239600546e8fb1547b,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765859225045969565,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 261052b5-937f-4f46-8238-ab5a0913c588,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1,PodSandboxId:82443fd12107522a7421d02a683c0eb20e501025efb36d6e8b1c5aa2af8053b3,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765859217549240089,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-k5hzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c86aac94-7319-4717-b09c-4c5ce48d083b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a,PodSandboxId:39dfd92507902a4f0ae2045a138312531afe92d81dcdb55807b374192ee791e0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765859216974023700,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h5nhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98c8054c-fb42-44bb-96c3-b9e2b534f591,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5,PodSandboxId:cea2cbe33776e1a18bc2558f40b02902704d8f9e691c99a63bccb5a845297286,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765859205307896383,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dcda41444dbf89830a69aa2ef3ed2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05,PodSandboxId:7ed0f7d74790722c5225f0ee9a4c49794cd09eb16ec34ae331c0b6346d001613,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765859205315782468,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d396e6c09d971f3da5ab405f520ebf96,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26,PodSandboxId:2b687031565d0d353a82b315bdc7a9a49a11ae2184f22f5fd9c6dc453c8a900f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765859205296822726,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 328dfa107a11db8f9546f472798d351e,},Annotations:m
ap[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f,PodSandboxId:3c0d13efc0a7352880e11f349d731bb791fb853e0e24804081ea4d1dec39ce15,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765859205264336254,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-addons-153066,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 481d5acbe07932cec43964b40b18e484,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a48df7d-b113-45bc-8953-828e5454deac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b9c809cddb711       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   17eb35aa9b53a       nginx                                       default
	da1899bf0f372       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   3057e15654603       busybox                                     default
	d6f2c27cd73ef       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   2d7797ab913e2       ingress-nginx-controller-85d4c799dd-w5fvb   ingress-nginx
	651b3c32c1f8a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   37271fa0c448e       ingress-nginx-admission-patch-cjxvw         ingress-nginx
	485e0f16c0030       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   53be126e98c2d       ingress-nginx-admission-create-7tk55        ingress-nginx
	5a4d570fbd10c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   0aa876268e46b       kube-ingress-dns-minikube                   kube-system
	c53d024caa8b1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   af30bd1e8cf2e       amd-gpu-device-plugin-hhs5c                 kube-system
	d065f232053c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   7e7dc1959db10       storage-provisioner                         kube-system
	572d7b3b73779       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   82443fd121075       coredns-66bc5c9577-k5hzj                    kube-system
	b2da8aa9cf49d       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   39dfd92507902       kube-proxy-h5nhv                            kube-system
	596fad690cd4f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   7ed0f7d747907       kube-scheduler-addons-153066                kube-system
	4a6f75243dd73       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   cea2cbe33776e       etcd-addons-153066                          kube-system
	a668ea7a775ac       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   2b687031565d0       kube-apiserver-addons-153066                kube-system
	3920403bed4db       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   3c0d13efc0a73       kube-controller-manager-addons-153066       kube-system
	
	
	==> coredns [572d7b3b73779232f337aadde20111aa325376ffc431153129764d474c3172f1] <==
	[INFO] 10.244.0.8:51008 - 36670 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00044604s
	[INFO] 10.244.0.8:51008 - 30170 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000136714s
	[INFO] 10.244.0.8:51008 - 45330 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00015605s
	[INFO] 10.244.0.8:51008 - 11419 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000213065s
	[INFO] 10.244.0.8:51008 - 2807 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000211715s
	[INFO] 10.244.0.8:51008 - 32361 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000309435s
	[INFO] 10.244.0.8:51008 - 25917 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000119812s
	[INFO] 10.244.0.8:59063 - 61784 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106261s
	[INFO] 10.244.0.8:59063 - 62099 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000293636s
	[INFO] 10.244.0.8:57569 - 48647 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073974s
	[INFO] 10.244.0.8:57569 - 48330 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007519s
	[INFO] 10.244.0.8:33855 - 20295 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000173748s
	[INFO] 10.244.0.8:33855 - 20567 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000125821s
	[INFO] 10.244.0.8:42237 - 24256 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112922s
	[INFO] 10.244.0.8:42237 - 24003 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000427458s
	[INFO] 10.244.0.23:47614 - 8014 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001636225s
	[INFO] 10.244.0.23:41544 - 61392 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000697809s
	[INFO] 10.244.0.23:44766 - 37939 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000193283s
	[INFO] 10.244.0.23:47130 - 44624 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000115496s
	[INFO] 10.244.0.23:36193 - 53897 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205927s
	[INFO] 10.244.0.23:53679 - 44985 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000238401s
	[INFO] 10.244.0.23:37277 - 51910 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001583031s
	[INFO] 10.244.0.23:52153 - 31072 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00358856s
	[INFO] 10.244.0.26:37386 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000302807s
	[INFO] 10.244.0.26:51712 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164445s
	
	
	==> describe nodes <==
	Name:               addons-153066
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-153066
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=addons-153066
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T04_26_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-153066
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 04:26:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-153066
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 04:31:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 04:29:34 +0000   Tue, 16 Dec 2025 04:26:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 04:29:34 +0000   Tue, 16 Dec 2025 04:26:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 04:29:34 +0000   Tue, 16 Dec 2025 04:26:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 04:29:34 +0000   Tue, 16 Dec 2025 04:26:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    addons-153066
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9b6581480b74e0a92c84d21ede24ac3
	  System UUID:                b9b65814-80b7-4e0a-92c8-4d21ede24ac3
	  Boot ID:                    c252c32f-203c-4e98-a15c-5bb5727105f2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-7bj4k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-w5fvb    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-hhs5c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-66bc5c9577-k5hzj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m34s
	  kube-system                 etcd-addons-153066                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-153066                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-153066        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-h5nhv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-scheduler-addons-153066                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m32s                  kube-proxy       
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m47s)  kubelet          Node addons-153066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m47s)  kubelet          Node addons-153066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m47s)  kubelet          Node addons-153066 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s                  kubelet          Node addons-153066 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s                  kubelet          Node addons-153066 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s                  kubelet          Node addons-153066 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s                  kubelet          Node addons-153066 status is now: NodeReady
	  Normal  RegisteredNode           4m36s                  node-controller  Node addons-153066 event: Registered Node addons-153066 in Controller
	
	
	==> dmesg <==
	[  +0.103587] kauditd_printk_skb: 437 callbacks suppressed
	[  +5.929153] kauditd_printk_skb: 245 callbacks suppressed
	[  +9.589881] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.679554] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.656177] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.521836] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.654502] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.127981] kauditd_printk_skb: 20 callbacks suppressed
	[Dec16 04:28] kauditd_printk_skb: 192 callbacks suppressed
	[  +1.977677] kauditd_printk_skb: 120 callbacks suppressed
	[  +6.151932] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.780054] kauditd_printk_skb: 32 callbacks suppressed
	[  +2.242811] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.774591] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.873439] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.022031] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 57 callbacks suppressed
	[Dec16 04:29] kauditd_printk_skb: 129 callbacks suppressed
	[  +3.295583] kauditd_printk_skb: 173 callbacks suppressed
	[  +1.849988] kauditd_printk_skb: 106 callbacks suppressed
	[  +1.805160] kauditd_printk_skb: 96 callbacks suppressed
	[  +0.000313] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.918307] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.727768] kauditd_printk_skb: 127 callbacks suppressed
	[Dec16 04:31] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [4a6f75243dd73b8f78264b726f85d4cefe141c6c8fb29f25f86e1e352c2302c5] <==
	{"level":"warn","ts":"2025-12-16T04:27:50.304780Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:27:49.917199Z","time spent":"387.576146ms","remote":"127.0.0.1:60342","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":28,"request content":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 "}
	{"level":"warn","ts":"2025-12-16T04:27:50.304880Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"457.745222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:27:50.304969Z","caller":"traceutil/trace.go:172","msg":"trace[1830167884] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1019; }","duration":"457.834126ms","start":"2025-12-16T04:27:49.847130Z","end":"2025-12-16T04:27:50.304964Z","steps":["trace[1830167884] 'agreement among raft nodes before linearized reading'  (duration: 457.735529ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:27:50.305075Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:27:49.847113Z","time spent":"457.954785ms","remote":"127.0.0.1:59592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-16T04:27:50.306684Z","caller":"traceutil/trace.go:172","msg":"trace[1705315545] transaction","detail":"{read_only:false; response_revision:1020; number_of_response:1; }","duration":"217.426034ms","start":"2025-12-16T04:27:50.089250Z","end":"2025-12-16T04:27:50.306676Z","steps":["trace[1705315545] 'process raft request'  (duration: 215.227705ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:27:55.458732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.663759ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:27:55.458811Z","caller":"traceutil/trace.go:172","msg":"trace[433126633] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1031; }","duration":"119.792212ms","start":"2025-12-16T04:27:55.339002Z","end":"2025-12-16T04:27:55.458794Z","steps":["trace[433126633] 'agreement among raft nodes before linearized reading'  (duration: 68.578013ms)","trace[433126633] 'range keys from in-memory index tree'  (duration: 51.096611ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:27:55.458842Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.019484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:27:55.458911Z","caller":"traceutil/trace.go:172","msg":"trace[848838298] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1031; }","duration":"111.109059ms","start":"2025-12-16T04:27:55.347792Z","end":"2025-12-16T04:27:55.458901Z","steps":["trace[848838298] 'agreement among raft nodes before linearized reading'  (duration: 110.958009ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:27:55.459421Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.182506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:27:55.459517Z","caller":"traceutil/trace.go:172","msg":"trace[2135363795] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1031; }","duration":"113.282108ms","start":"2025-12-16T04:27:55.346228Z","end":"2025-12-16T04:27:55.459510Z","steps":["trace[2135363795] 'agreement among raft nodes before linearized reading'  (duration: 113.163666ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:28:03.663490Z","caller":"traceutil/trace.go:172","msg":"trace[1825020145] transaction","detail":"{read_only:false; response_revision:1096; number_of_response:1; }","duration":"127.506566ms","start":"2025-12-16T04:28:03.535965Z","end":"2025-12-16T04:28:03.663471Z","steps":["trace[1825020145] 'process raft request'  (duration: 127.378292ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:28:10.031424Z","caller":"traceutil/trace.go:172","msg":"trace[115661521] linearizableReadLoop","detail":"{readStateIndex:1174; appliedIndex:1174; }","duration":"215.26937ms","start":"2025-12-16T04:28:09.816140Z","end":"2025-12-16T04:28:10.031409Z","steps":["trace[115661521] 'read index received'  (duration: 215.26508ms)","trace[115661521] 'applied index is now lower than readState.Index'  (duration: 3.82µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T04:28:10.032670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"216.51431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:28:10.032699Z","caller":"traceutil/trace.go:172","msg":"trace[1957910568] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1144; }","duration":"216.554598ms","start":"2025-12-16T04:28:09.816137Z","end":"2025-12-16T04:28:10.032692Z","steps":["trace[1957910568] 'agreement among raft nodes before linearized reading'  (duration: 216.483054ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:28:10.032974Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.482559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T04:28:10.033025Z","caller":"traceutil/trace.go:172","msg":"trace[1037108114] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1144; }","duration":"186.539408ms","start":"2025-12-16T04:28:09.846479Z","end":"2025-12-16T04:28:10.033018Z","steps":["trace[1037108114] 'agreement among raft nodes before linearized reading'  (duration: 186.46712ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:28:19.771188Z","caller":"traceutil/trace.go:172","msg":"trace[1729950387] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"101.410828ms","start":"2025-12-16T04:28:19.669721Z","end":"2025-12-16T04:28:19.771132Z","steps":["trace[1729950387] 'process raft request'  (duration: 100.42485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:28:51.603726Z","caller":"traceutil/trace.go:172","msg":"trace[1753626828] transaction","detail":"{read_only:false; response_revision:1353; number_of_response:1; }","duration":"112.097375ms","start":"2025-12-16T04:28:51.491604Z","end":"2025-12-16T04:28:51.603702Z","steps":["trace[1753626828] 'process raft request'  (duration: 111.747327ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T04:28:53.229810Z","caller":"traceutil/trace.go:172","msg":"trace[518491461] transaction","detail":"{read_only:false; response_revision:1355; number_of_response:1; }","duration":"381.117813ms","start":"2025-12-16T04:28:52.848675Z","end":"2025-12-16T04:28:53.229793Z","steps":["trace[518491461] 'process raft request'  (duration: 381.000848ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:28:53.229982Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T04:28:52.848649Z","time spent":"381.223566ms","remote":"127.0.0.1:59714","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1347 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2025-12-16T04:28:53.749168Z","caller":"traceutil/trace.go:172","msg":"trace[938438409] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1396; }","duration":"111.558845ms","start":"2025-12-16T04:28:53.637592Z","end":"2025-12-16T04:28:53.749151Z","steps":["trace[938438409] 'read index received'  (duration: 111.552232ms)","trace[938438409] 'applied index is now lower than readState.Index'  (duration: 5.756µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-16T04:28:53.749341Z","caller":"traceutil/trace.go:172","msg":"trace[1468599566] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"125.597826ms","start":"2025-12-16T04:28:53.623732Z","end":"2025-12-16T04:28:53.749330Z","steps":["trace[1468599566] 'process raft request'  (duration: 125.438909ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T04:28:53.749450Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.843075ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2025-12-16T04:28:53.749475Z","caller":"traceutil/trace.go:172","msg":"trace[320375236] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1356; }","duration":"111.881768ms","start":"2025-12-16T04:28:53.637587Z","end":"2025-12-16T04:28:53.749469Z","steps":["trace[320375236] 'agreement among raft nodes before linearized reading'  (duration: 111.740874ms)"],"step_count":1}
	
	
	==> kernel <==
	 04:31:30 up 5 min,  0 users,  load average: 1.12, 1.56, 0.79
	Linux addons-153066 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a668ea7a775ace807986d650432e9486841b1694cc7e3cea4aa90f9db74d4d26] <==
	E1216 04:27:49.434708       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.195.173:443: connect: connection refused" logger="UnhandledError"
	E1216 04:27:49.436875       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.195.173:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.195.173:443: connect: connection refused" logger="UnhandledError"
	I1216 04:27:49.546766       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 04:28:38.287830       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8443->192.168.39.1:51728: use of closed network connection
	E1216 04:28:38.495772       1 conn.go:339] Error on socket receive: read tcp 192.168.39.189:8443->192.168.39.1:51758: use of closed network connection
	I1216 04:28:47.578145       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.2.168"}
	I1216 04:29:00.914370       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 04:29:01.119375       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.125.73"}
	I1216 04:29:14.521583       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1216 04:29:35.757267       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1216 04:29:41.144669       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 04:29:41.144793       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 04:29:41.182929       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 04:29:41.183024       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 04:29:41.184690       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 04:29:41.184737       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 04:29:41.202954       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 04:29:41.203061       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 04:29:41.227427       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 04:29:41.227467       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 04:29:42.185950       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 04:29:42.230661       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 04:29:42.243389       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 04:29:50.468152       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1216 04:31:29.005965       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.103.94"}
	
	
	==> kube-controller-manager [3920403bed4db8081a62b794d286d69772e0b066f21464c4462b0e238f3c104f] <==
	E1216 04:29:50.097648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:29:52.156066       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:29:52.157421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1216 04:29:55.151774       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1216 04:29:55.151903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 04:29:55.248853       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1216 04:29:55.248949       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1216 04:29:56.722993       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:29:56.724076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:29:59.574508       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:29:59.576550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:00.356470       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:00.357576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:11.613467       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:11.614787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:16.180198       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:16.181149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:22.796768       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:22.797917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:48.847184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:48.848265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:30:59.455199       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:30:59.456356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 04:31:12.121619       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 04:31:12.122886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b2da8aa9cf49d54a771b62b0b36d4dcca12b05afaa1ae334b8ddc6f491c8d26a] <==
	I1216 04:26:57.660859       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 04:26:57.863491       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 04:26:57.867975       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.189"]
	E1216 04:26:57.879729       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 04:26:58.160977       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 04:26:58.161259       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 04:26:58.161473       1 server_linux.go:132] "Using iptables Proxier"
	I1216 04:26:58.193423       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 04:26:58.193761       1 server.go:527] "Version info" version="v1.34.2"
	I1216 04:26:58.193773       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 04:26:58.199379       1 config.go:106] "Starting endpoint slice config controller"
	I1216 04:26:58.199394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 04:26:58.200491       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 04:26:58.200499       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 04:26:58.201173       1 config.go:309] "Starting node config controller"
	I1216 04:26:58.201179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 04:26:58.201184       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 04:26:58.206919       1 config.go:200] "Starting service config controller"
	I1216 04:26:58.207589       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 04:26:58.300660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 04:26:58.300716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 04:26:58.308028       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [596fad690cd4ff248141fb728834feee5769f975593c16b2c7310569225b0a05] <==
	E1216 04:26:48.053426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 04:26:48.053506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:48.053617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1216 04:26:48.044233       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1216 04:26:48.053940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 04:26:48.055058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:26:48.055168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:48.055396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:26:48.056021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 04:26:48.057613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:26:48.880473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 04:26:48.902919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 04:26:48.910689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 04:26:48.947658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 04:26:49.043536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 04:26:49.086723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 04:26:49.120559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 04:26:49.145911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1216 04:26:49.212837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 04:26:49.250007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1216 04:26:49.271786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 04:26:49.359456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 04:26:49.365672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1216 04:26:49.375336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1216 04:26:52.545660       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.087212    1511 scope.go:117] "RemoveContainer" containerID="420853799a3443a976842aa6505c894cef77bae1c6a3f9f045b830f405f607c9"
	Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.206077    1511 scope.go:117] "RemoveContainer" containerID="bb8f65c76e45e9f6f722d1fa821cd3b4655159f24d8140f9c3af0a1ab68b5dff"
	Dec 16 04:29:54 addons-153066 kubelet[1511]: I1216 04:29:54.330153    1511 scope.go:117] "RemoveContainer" containerID="c8d31a8f6f3088e13d66b5bec43b0837f24d42a9805bb334ecc3af167f52fbcd"
	Dec 16 04:30:00 addons-153066 kubelet[1511]: I1216 04:30:00.597042    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:30:01 addons-153066 kubelet[1511]: E1216 04:30:01.081720    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859401081116170 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:01 addons-153066 kubelet[1511]: E1216 04:30:01.081747    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859401081116170 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:11 addons-153066 kubelet[1511]: E1216 04:30:11.086930    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859411085340839 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:11 addons-153066 kubelet[1511]: E1216 04:30:11.086974    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859411085340839 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:21 addons-153066 kubelet[1511]: E1216 04:30:21.089801    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859421089419175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:21 addons-153066 kubelet[1511]: E1216 04:30:21.089844    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859421089419175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:31 addons-153066 kubelet[1511]: E1216 04:30:31.095747    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859431094081659 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:31 addons-153066 kubelet[1511]: E1216 04:30:31.095775    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859431094081659 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:41 addons-153066 kubelet[1511]: E1216 04:30:41.099444    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859441098557380 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:41 addons-153066 kubelet[1511]: E1216 04:30:41.099470    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859441098557380 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:51 addons-153066 kubelet[1511]: E1216 04:30:51.102148    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859451101625750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:30:51 addons-153066 kubelet[1511]: E1216 04:30:51.102553    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859451101625750 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:01 addons-153066 kubelet[1511]: E1216 04:31:01.106088    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859461105764696 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:01 addons-153066 kubelet[1511]: E1216 04:31:01.106108    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859461105764696 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:07 addons-153066 kubelet[1511]: I1216 04:31:07.596602    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:31:09 addons-153066 kubelet[1511]: I1216 04:31:09.596058    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hhs5c" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 04:31:11 addons-153066 kubelet[1511]: E1216 04:31:11.109675    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859471109253167 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:11 addons-153066 kubelet[1511]: E1216 04:31:11.109704    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859471109253167 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:21 addons-153066 kubelet[1511]: E1216 04:31:21.114013    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765859481112858006 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:21 addons-153066 kubelet[1511]: E1216 04:31:21.114400    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765859481112858006 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 04:31:29 addons-153066 kubelet[1511]: I1216 04:31:29.042728    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmt6w\" (UniqueName: \"kubernetes.io/projected/a2654019-ae9a-44ed-ba5e-6eea0488c198-kube-api-access-fmt6w\") pod \"hello-world-app-5d498dc89-7bj4k\" (UID: \"a2654019-ae9a-44ed-ba5e-6eea0488c198\") " pod="default/hello-world-app-5d498dc89-7bj4k"
	
	
	==> storage-provisioner [d065f232053c70c374d73496a42830ab0ba8afe9511c424efc1c7b52d7024ab4] <==
	W1216 04:31:06.499107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:08.503395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:08.508110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:10.512767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:10.517918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:12.521816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:12.529563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:14.533480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:14.538641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:16.542813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:16.548147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:18.552008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:18.557497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:20.561718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:20.569163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:22.572877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:22.578627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:24.582181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:24.589586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:26.593872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:26.602611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:28.606256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:28.613739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:30.619451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 04:31:30.627170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-153066 -n addons-153066
helpers_test.go:270: (dbg) Run:  kubectl --context addons-153066 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw: exit status 1 (78.566856ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-7bj4k
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-153066/192.168.39.189
	Start Time:       Tue, 16 Dec 2025 04:31:28 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fmt6w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fmt6w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-7bj4k to addons-153066
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7tk55" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-cjxvw" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-153066 describe pod hello-world-app-5d498dc89-7bj4k ingress-nginx-admission-create-7tk55 ingress-nginx-admission-patch-cjxvw: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable ingress --alsologtostderr -v=1: (7.756327603s)
--- FAIL: TestAddons/parallel/Ingress (159.21s)

                                                
                                    
x
+
TestPreload (146.54s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-992301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1216 05:18:27.161048    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-992301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m28.668955399s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-992301 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-992301 image pull gcr.io/k8s-minikube/busybox: (3.902673326s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-992301
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-992301: (7.099475026s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-992301 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1216 05:20:01.705420    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-992301 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (44.165435068s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-992301 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
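For reference, the sequence exercised by this test can be replayed by hand with the same commands recorded above (a sketch only; the profile name is reused from this run and the final grep check is illustrative, not part of the test itself):

	# start without the preload tarball, pull an extra image, then restart with preload enabled
	out/minikube-linux-amd64 start -p test-preload-992301 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-992301 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-992301
	out/minikube-linux-amd64 start -p test-preload-992301 --preload=true --driver=kvm2 --container-runtime=crio
	# the pulled image is expected to survive the restart; in this run it did not appear
	out/minikube-linux-amd64 -p test-preload-992301 image list | grep busybox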
panic.go:615: *** TestPreload FAILED at 2025-12-16 05:20:38.822111248 +0000 UTC m=+3316.341323108
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-992301 -n test-preload-992301
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-992301 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p test-preload-992301 logs -n 25: (1.063831845s)
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-581749 ssh -n multinode-581749-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ ssh     │ multinode-581749 ssh -n multinode-581749 sudo cat /home/docker/cp-test_multinode-581749-m03_multinode-581749.txt                                          │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ cp      │ multinode-581749 cp multinode-581749-m03:/home/docker/cp-test.txt multinode-581749-m02:/home/docker/cp-test_multinode-581749-m03_multinode-581749-m02.txt │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ ssh     │ multinode-581749 ssh -n multinode-581749-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ ssh     │ multinode-581749 ssh -n multinode-581749-m02 sudo cat /home/docker/cp-test_multinode-581749-m03_multinode-581749-m02.txt                                  │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ node    │ multinode-581749 node stop m03                                                                                                                            │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:07 UTC │
	│ node    │ multinode-581749 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:07 UTC │ 16 Dec 25 05:08 UTC │
	│ node    │ list -p multinode-581749                                                                                                                                  │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:08 UTC │                     │
	│ stop    │ -p multinode-581749                                                                                                                                       │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:08 UTC │ 16 Dec 25 05:11 UTC │
	│ start   │ -p multinode-581749 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:11 UTC │ 16 Dec 25 05:13 UTC │
	│ node    │ list -p multinode-581749                                                                                                                                  │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:13 UTC │                     │
	│ node    │ multinode-581749 node delete m03                                                                                                                          │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:13 UTC │ 16 Dec 25 05:13 UTC │
	│ stop    │ multinode-581749 stop                                                                                                                                     │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:13 UTC │ 16 Dec 25 05:16 UTC │
	│ start   │ -p multinode-581749 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:16 UTC │ 16 Dec 25 05:17 UTC │
	│ node    │ list -p multinode-581749                                                                                                                                  │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start   │ -p multinode-581749-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-581749-m02 │ jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │                     │
	│ start   │ -p multinode-581749-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-581749-m03 │ jenkins │ v1.37.0 │ 16 Dec 25 05:17 UTC │ 16 Dec 25 05:18 UTC │
	│ node    │ add -p multinode-581749                                                                                                                                   │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:18 UTC │                     │
	│ delete  │ -p multinode-581749-m03                                                                                                                                   │ multinode-581749-m03 │ jenkins │ v1.37.0 │ 16 Dec 25 05:18 UTC │ 16 Dec 25 05:18 UTC │
	│ delete  │ -p multinode-581749                                                                                                                                       │ multinode-581749     │ jenkins │ v1.37.0 │ 16 Dec 25 05:18 UTC │ 16 Dec 25 05:18 UTC │
	│ start   │ -p test-preload-992301 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-992301  │ jenkins │ v1.37.0 │ 16 Dec 25 05:18 UTC │ 16 Dec 25 05:19 UTC │
	│ image   │ test-preload-992301 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-992301  │ jenkins │ v1.37.0 │ 16 Dec 25 05:19 UTC │ 16 Dec 25 05:19 UTC │
	│ stop    │ -p test-preload-992301                                                                                                                                    │ test-preload-992301  │ jenkins │ v1.37.0 │ 16 Dec 25 05:19 UTC │ 16 Dec 25 05:19 UTC │
	│ start   │ -p test-preload-992301 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-992301  │ jenkins │ v1.37.0 │ 16 Dec 25 05:19 UTC │ 16 Dec 25 05:20 UTC │
	│ image   │ test-preload-992301 image list                                                                                                                            │ test-preload-992301  │ jenkins │ v1.37.0 │ 16 Dec 25 05:20 UTC │ 16 Dec 25 05:20 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:19:54
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:19:54.517089   34771 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:19:54.517350   34771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:19:54.517362   34771 out.go:374] Setting ErrFile to fd 2...
	I1216 05:19:54.517367   34771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:19:54.517650   34771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:19:54.518111   34771 out.go:368] Setting JSON to false
	I1216 05:19:54.519170   34771 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3736,"bootTime":1765858658,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:19:54.519235   34771 start.go:143] virtualization: kvm guest
	I1216 05:19:54.521837   34771 out.go:179] * [test-preload-992301] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:19:54.523307   34771 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:19:54.523290   34771 notify.go:221] Checking for updates...
	I1216 05:19:54.525972   34771 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:19:54.527827   34771 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:19:54.529280   34771 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 05:19:54.530799   34771 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:19:54.532155   34771 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:19:54.534073   34771 config.go:182] Loaded profile config "test-preload-992301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:19:54.534756   34771 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:19:54.572245   34771 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 05:19:54.573658   34771 start.go:309] selected driver: kvm2
	I1216 05:19:54.573683   34771 start.go:927] validating driver "kvm2" against &{Name:test-preload-992301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-992301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:19:54.573849   34771 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:19:54.575132   34771 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:19:54.575162   34771 cni.go:84] Creating CNI manager for ""
	I1216 05:19:54.575219   34771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:19:54.575287   34771 start.go:353] cluster config:
	{Name:test-preload-992301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-992301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:19:54.575406   34771 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:19:54.577102   34771 out.go:179] * Starting "test-preload-992301" primary control-plane node in "test-preload-992301" cluster
	I1216 05:19:54.578415   34771 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:19:54.578452   34771 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:19:54.578464   34771 cache.go:65] Caching tarball of preloaded images
	I1216 05:19:54.578572   34771 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:19:54.578586   34771 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:19:54.578674   34771 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/config.json ...
	I1216 05:19:54.578933   34771 start.go:360] acquireMachinesLock for test-preload-992301: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 05:19:54.578986   34771 start.go:364] duration metric: took 30.978µs to acquireMachinesLock for "test-preload-992301"
	I1216 05:19:54.579004   34771 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:19:54.579011   34771 fix.go:54] fixHost starting: 
	I1216 05:19:54.580924   34771 fix.go:112] recreateIfNeeded on test-preload-992301: state=Stopped err=<nil>
	W1216 05:19:54.580946   34771 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:19:54.582767   34771 out.go:252] * Restarting existing kvm2 VM for "test-preload-992301" ...
	I1216 05:19:54.582807   34771 main.go:143] libmachine: starting domain...
	I1216 05:19:54.582816   34771 main.go:143] libmachine: ensuring networks are active...
	I1216 05:19:54.583713   34771 main.go:143] libmachine: Ensuring network default is active
	I1216 05:19:54.584166   34771 main.go:143] libmachine: Ensuring network mk-test-preload-992301 is active
	I1216 05:19:54.584630   34771 main.go:143] libmachine: getting domain XML...
	I1216 05:19:54.585858   34771 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-992301</name>
	  <uuid>c40e4ac1-f1ce-4089-9e51-98c94c5db49e</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/test-preload-992301.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2e:4b:38'/>
	      <source network='mk-test-preload-992301'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:9d:7e:82'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
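
The domain XML dumped above is what libmachine hands back to libvirt when restarting the VM, and the MAC addresses it declares are what the later DHCP-lease matching keys on. A minimal sketch (illustrative types, not minikube's own code) of pulling those MAC/network pairs out of such a dump:

```go
// Sketch: extract interface MACs and their source networks from a libvirt
// domain XML document like the one logged above. Only the fields needed
// here are modeled.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"os"
)

type mac struct {
	Address string `xml:"address,attr"`
}

type source struct {
	Network string `xml:"network,attr"`
}

type iface struct {
	MAC    mac    `xml:"mac"`
	Source source `xml:"source"`
}

type domain struct {
	Name       string  `xml:"name"`
	Interfaces []iface `xml:"devices>interface"`
}

func main() {
	// e.g. the XML produced by `virsh dumpxml test-preload-992301`
	data, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	var d domain
	if err := xml.Unmarshal(data, &d); err != nil {
		log.Fatal(err)
	}
	for _, in := range d.Interfaces {
		fmt.Printf("domain %s: MAC %s on network %s\n", d.Name, in.MAC.Address, in.Source.Network)
	}
}
```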
	
	I1216 05:19:55.907405   34771 main.go:143] libmachine: waiting for domain to start...
	I1216 05:19:55.909068   34771 main.go:143] libmachine: domain is now running
	I1216 05:19:55.909107   34771 main.go:143] libmachine: waiting for IP...
	I1216 05:19:55.910069   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:19:55.910633   34771 main.go:143] libmachine: domain test-preload-992301 has current primary IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:19:55.910649   34771 main.go:143] libmachine: found domain IP: 192.168.39.195
	I1216 05:19:55.910658   34771 main.go:143] libmachine: reserving static IP address...
	I1216 05:19:55.911132   34771 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-992301", mac: "52:54:00:2e:4b:38", ip: "192.168.39.195"} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:18:30 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:19:55.911167   34771 main.go:143] libmachine: skip adding static IP to network mk-test-preload-992301 - found existing host DHCP lease matching {name: "test-preload-992301", mac: "52:54:00:2e:4b:38", ip: "192.168.39.195"}
	I1216 05:19:55.911184   34771 main.go:143] libmachine: reserved static IP address 192.168.39.195 for domain test-preload-992301
	I1216 05:19:55.911198   34771 main.go:143] libmachine: waiting for SSH...
	I1216 05:19:55.911205   34771 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 05:19:55.913826   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:19:55.914272   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:18:30 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:19:55.914307   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:19:55.914521   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:19:55.914793   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:19:55.914805   34771 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 05:19:58.973120   34771 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.195:22: connect: no route to host
	I1216 05:20:05.053062   34771 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.195:22: connect: no route to host
	I1216 05:20:08.169909   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: 
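
The two "no route to host" dials above are the normal SSH wait loop: the guest already has an address before sshd is listening. A rough sketch of that kind of poll, with placeholder address and timings:

```go
// Sketch: poll an address until a TCP connection succeeds or an overall
// deadline passes, mirroring the "waiting for SSH" phase logged above.
package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func waitForTCP(addr string, every, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for {
		conn, err := net.DialTimeout("tcp", addr, every)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: last error: %w", addr, err)
		}
		log.Printf("still waiting for %s: %v", addr, err)
		time.Sleep(every)
	}
}

func main() {
	if err := waitForTCP("192.168.39.195:22", 3*time.Second, 2*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH port reachable")
}
```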
	I1216 05:20:08.173865   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.174358   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.174389   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.174650   34771 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/config.json ...
	I1216 05:20:08.174931   34771 machine.go:94] provisionDockerMachine start ...
	I1216 05:20:08.177310   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.177848   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.177892   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.178100   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:20:08.178369   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:20:08.178382   34771 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:20:08.293881   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 05:20:08.293917   34771 buildroot.go:166] provisioning hostname "test-preload-992301"
	I1216 05:20:08.297529   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.298108   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.298153   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.298401   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:20:08.298678   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:20:08.298698   34771 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-992301 && echo "test-preload-992301" | sudo tee /etc/hostname
	I1216 05:20:08.448301   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-992301
	
	I1216 05:20:08.451543   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.452067   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.452119   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.452337   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:20:08.452557   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:20:08.452572   34771 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-992301' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-992301/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-992301' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:20:08.588357   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: 
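
The shell snippet a few lines up rewrites the 127.0.1.1 line in /etc/hosts (or appends one) so the guest resolves its new hostname. The same edit as a standalone sketch, operating on a local copy of the file; the path and hostname are placeholders:

```go
// Sketch: ensure a hosts file maps 127.0.1.1 to the given hostname, the
// same fix-up the logged shell command applies on the guest.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	text := string(data)
	// Hostname already present at the end of some line? Nothing to do.
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(text) {
		return nil
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(text) {
		text = loop.ReplaceAllString(text, "127.0.1.1 "+name)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + name + "\n"
	}
	return os.WriteFile(hostsPath, []byte(text), 0644)
}

func main() {
	if err := ensureHostname("hosts.copy", "test-preload-992301"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```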
	I1216 05:20:08.588386   34771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
	I1216 05:20:08.588429   34771 buildroot.go:174] setting up certificates
	I1216 05:20:08.588443   34771 provision.go:84] configureAuth start
	I1216 05:20:08.591559   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.592138   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.592167   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.595106   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.595518   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.595541   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.595695   34771 provision.go:143] copyHostCerts
	I1216 05:20:08.595787   34771 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem, removing ...
	I1216 05:20:08.595809   34771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem
	I1216 05:20:08.595895   34771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
	I1216 05:20:08.596016   34771 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem, removing ...
	I1216 05:20:08.596028   34771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem
	I1216 05:20:08.596077   34771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
	I1216 05:20:08.596161   34771 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem, removing ...
	I1216 05:20:08.596171   34771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem
	I1216 05:20:08.596212   34771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
	I1216 05:20:08.596286   34771 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.test-preload-992301 san=[127.0.0.1 192.168.39.195 localhost minikube test-preload-992301]
	I1216 05:20:08.644806   34771 provision.go:177] copyRemoteCerts
	I1216 05:20:08.644882   34771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:20:08.647876   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.648287   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.648310   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.648492   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:08.738084   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:20:08.772359   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 05:20:08.805852   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 05:20:08.838890   34771 provision.go:87] duration metric: took 250.432436ms to configureAuth
	I1216 05:20:08.838919   34771 buildroot.go:189] setting minikube options for container-runtime
	I1216 05:20:08.839122   34771 config.go:182] Loaded profile config "test-preload-992301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:20:08.842176   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.842600   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:08.842623   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:08.842854   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:20:08.843095   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:20:08.843110   34771 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:20:09.104434   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:20:09.104466   34771 machine.go:97] duration metric: took 929.520032ms to provisionDockerMachine
	I1216 05:20:09.104482   34771 start.go:293] postStartSetup for "test-preload-992301" (driver="kvm2")
	I1216 05:20:09.104495   34771 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:20:09.104595   34771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:20:09.107534   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.107997   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:09.108051   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.108268   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:09.196077   34771 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:20:09.201616   34771 info.go:137] Remote host: Buildroot 2025.02
	I1216 05:20:09.201654   34771 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
	I1216 05:20:09.201732   34771 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
	I1216 05:20:09.201852   34771 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem -> 89872.pem in /etc/ssl/certs
	I1216 05:20:09.202001   34771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:20:09.215874   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:20:09.249175   34771 start.go:296] duration metric: took 144.675029ms for postStartSetup
	I1216 05:20:09.249229   34771 fix.go:56] duration metric: took 14.67021774s for fixHost
	I1216 05:20:09.252348   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.252812   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:09.252837   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.253013   34771 main.go:143] libmachine: Using SSH client type: native
	I1216 05:20:09.253225   34771 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I1216 05:20:09.253241   34771 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 05:20:09.366849   34771 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765862409.332660194
	
	I1216 05:20:09.366873   34771 fix.go:216] guest clock: 1765862409.332660194
	I1216 05:20:09.366883   34771 fix.go:229] Guest: 2025-12-16 05:20:09.332660194 +0000 UTC Remote: 2025-12-16 05:20:09.24923379 +0000 UTC m=+14.782804310 (delta=83.426404ms)
	I1216 05:20:09.366904   34771 fix.go:200] guest clock delta is within tolerance: 83.426404ms
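
The fix.go lines above sample the guest clock with `date +%s.%N` and compare it to the host's view of the time; only a delta outside tolerance would trigger a resync. A small sketch of that comparison, using the sampled value from the log and an assumed threshold (not minikube's actual tolerance):

```go
// Sketch: compare a guest clock sample against the host clock and decide
// whether the skew is acceptable.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time parsed from the `date +%s.%N` output seen in the log.
	guest := time.Unix(1765862409, 332660194)
	host := time.Now()

	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```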
	I1216 05:20:09.366910   34771 start.go:83] releasing machines lock for "test-preload-992301", held for 14.787914802s
	I1216 05:20:09.369958   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.370363   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:09.370394   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.370979   34771 ssh_runner.go:195] Run: cat /version.json
	I1216 05:20:09.371052   34771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:20:09.374004   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.374175   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.374433   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:09.374466   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.374631   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:09.374838   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:09.374864   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:09.375077   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:09.483467   34771 ssh_runner.go:195] Run: systemctl --version
	I1216 05:20:09.490447   34771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:20:09.651060   34771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:20:09.659330   34771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:20:09.659397   34771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:20:09.682241   34771 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 05:20:09.682282   34771 start.go:496] detecting cgroup driver to use...
	I1216 05:20:09.682355   34771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:20:09.703362   34771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:20:09.722920   34771 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:20:09.722985   34771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:20:09.743301   34771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:20:09.761996   34771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:20:09.919994   34771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:20:10.145763   34771 docker.go:234] disabling docker service ...
	I1216 05:20:10.145843   34771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:20:10.162886   34771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:20:10.179797   34771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:20:10.345003   34771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:20:10.500582   34771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:20:10.517339   34771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:20:10.543645   34771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:20:10.543709   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.557047   34771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:20:10.557126   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.571031   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.584908   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.598509   34771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:20:10.612616   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.626023   34771 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:20:10.649159   34771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
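
The sed commands above rewrite whole lines of /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the cgroup driver. An equivalent sketch using multiline regexp replacement on a local copy of the file:

```go
// Sketch: the same kind of line rewrite the `sed -i` commands apply on the
// node, done here on a local copy of the cri-o drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // local copy, not the real file on the node
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	text := string(data)
	// Replace whatever values were there with the desired pause image and
	// cgroup driver, matching whole lines like the sed expressions do.
	text = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(text, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	text = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(text, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(text), 0644); err != nil {
		log.Fatal(err)
	}
}
```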
	I1216 05:20:10.662975   34771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:20:10.674512   34771 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 05:20:10.674579   34771 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 05:20:10.696546   34771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:20:10.709280   34771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:20:10.862945   34771 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:20:10.975319   34771 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:20:10.975420   34771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:20:10.981492   34771 start.go:564] Will wait 60s for crictl version
	I1216 05:20:10.981565   34771 ssh_runner.go:195] Run: which crictl
	I1216 05:20:10.986554   34771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 05:20:11.025782   34771 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 05:20:11.025869   34771 ssh_runner.go:195] Run: crio --version
	I1216 05:20:11.057221   34771 ssh_runner.go:195] Run: crio --version
	I1216 05:20:11.090664   34771 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 05:20:11.094761   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:11.095170   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:11.095192   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:11.095400   34771 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 05:20:11.100727   34771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:20:11.117391   34771 kubeadm.go:884] updating cluster {Name:test-preload-992301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-992301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:20:11.117495   34771 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:20:11.117532   34771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:20:11.153784   34771 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 05:20:11.153856   34771 ssh_runner.go:195] Run: which lz4
	I1216 05:20:11.158557   34771 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 05:20:11.164033   34771 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 05:20:11.164085   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 05:20:12.556514   34771 crio.go:462] duration metric: took 1.397982171s to copy over tarball
	I1216 05:20:12.556590   34771 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 05:20:14.126431   34771 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.569812647s)
	I1216 05:20:14.126462   34771 crio.go:469] duration metric: took 1.569916647s to extract the tarball
	I1216 05:20:14.126469   34771 ssh_runner.go:146] rm: /preloaded.tar.lz4
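
The preload path above is: scp the cached tarball to /preloaded.tar.lz4, extract it into /var with an lz4 filter, then delete it. A sketch of the extraction step, shelling out to tar the same way; paths are placeholders and the machine needs tar and lz4 installed:

```go
// Sketch: extract a preloaded image tarball the way the ssh_runner call
// above does, via tar with an lz4 decompression filter.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("extracting preload tarball: %v", err)
	}
}
```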
	I1216 05:20:14.164097   34771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:20:14.203876   34771 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:20:14.203907   34771 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:20:14.203917   34771 kubeadm.go:935] updating node { 192.168.39.195 8443 v1.34.2 crio true true} ...
	I1216 05:20:14.204028   34771 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-992301 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-992301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:20:14.204111   34771 ssh_runner.go:195] Run: crio config
	I1216 05:20:14.252473   34771 cni.go:84] Creating CNI manager for ""
	I1216 05:20:14.252495   34771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:20:14.252512   34771 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:20:14.252531   34771 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-992301 NodeName:test-preload-992301 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:20:14.252666   34771 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-992301"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.195"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:20:14.252731   34771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:20:14.265924   34771 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:20:14.266053   34771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:20:14.278869   34771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1216 05:20:14.301568   34771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:20:14.324665   34771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
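
The kubeadm.yaml written above is a single multi-document file carrying the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration shown earlier. A sketch that walks those documents and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 and a placeholder path:

```go
// Sketch: list the kinds in a multi-document kubeadm config like the one
// generated above.
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}
```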
	I1216 05:20:14.347734   34771 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I1216 05:20:14.352357   34771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 05:20:14.368632   34771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:20:14.523264   34771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:20:14.557949   34771 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301 for IP: 192.168.39.195
	I1216 05:20:14.557987   34771 certs.go:195] generating shared ca certs ...
	I1216 05:20:14.558014   34771 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:20:14.558219   34771 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
	I1216 05:20:14.558283   34771 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
	I1216 05:20:14.558299   34771 certs.go:257] generating profile certs ...
	I1216 05:20:14.558452   34771 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.key
	I1216 05:20:14.558535   34771 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/apiserver.key.390433c9
	I1216 05:20:14.558605   34771 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/proxy-client.key
	I1216 05:20:14.558800   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem (1338 bytes)
	W1216 05:20:14.558855   34771 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987_empty.pem, impossibly tiny 0 bytes
	I1216 05:20:14.558871   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:20:14.558907   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:20:14.558943   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:20:14.558980   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
	I1216 05:20:14.559049   34771 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:20:14.559988   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:20:14.600870   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:20:14.648028   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:20:14.681487   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:20:14.714474   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 05:20:14.748393   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:20:14.781569   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:20:14.814438   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 05:20:14.846917   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /usr/share/ca-certificates/89872.pem (1708 bytes)
	I1216 05:20:14.879626   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:20:14.913565   34771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem --> /usr/share/ca-certificates/8987.pem (1338 bytes)
	I1216 05:20:14.946047   34771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:20:14.968459   34771 ssh_runner.go:195] Run: openssl version
	I1216 05:20:14.975571   34771 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89872.pem
	I1216 05:20:14.988326   34771 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89872.pem /etc/ssl/certs/89872.pem
	I1216 05:20:15.001073   34771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89872.pem
	I1216 05:20:15.007566   34771 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:37 /usr/share/ca-certificates/89872.pem
	I1216 05:20:15.007642   34771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89872.pem
	I1216 05:20:15.015815   34771 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:20:15.028928   34771 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89872.pem /etc/ssl/certs/3ec20f2e.0
	I1216 05:20:15.042449   34771 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:20:15.055524   34771 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:20:15.068666   34771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:20:15.074416   34771 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:20:15.074500   34771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:20:15.082451   34771 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:20:15.095784   34771 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 05:20:15.109250   34771 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8987.pem
	I1216 05:20:15.122388   34771 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8987.pem /etc/ssl/certs/8987.pem
	I1216 05:20:15.135138   34771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8987.pem
	I1216 05:20:15.140671   34771 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:37 /usr/share/ca-certificates/8987.pem
	I1216 05:20:15.140728   34771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8987.pem
	I1216 05:20:15.148399   34771 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:20:15.160806   34771 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8987.pem /etc/ssl/certs/51391683.0
	I1216 05:20:15.174178   34771 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:20:15.179944   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:20:15.187912   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:20:15.195653   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:20:15.203791   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:20:15.211767   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:20:15.219670   34771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
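
The series of `openssl x509 ... -checkend 86400` runs above asks whether each control-plane certificate is still valid for at least 24 hours before reusing it. A Go sketch of the same check with crypto/x509, using a placeholder path:

```go
// Sketch: the equivalent of `openssl x509 -checkend 86400`: fail if the
// certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	cutoff := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(cutoff) {
		fmt.Printf("certificate expires at %s: would regenerate\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}
```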
	I1216 05:20:15.227360   34771 kubeadm.go:401] StartCluster: {Name:test-preload-992301 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-992301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:20:15.227438   34771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:20:15.227501   34771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:20:15.264304   34771 cri.go:89] found id: ""
	I1216 05:20:15.264394   34771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 05:20:15.280202   34771 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 05:20:15.280221   34771 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 05:20:15.280265   34771 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 05:20:15.294149   34771 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:20:15.294584   34771 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-992301" does not appear in /home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:20:15.294712   34771 kubeconfig.go:62] /home/jenkins/minikube-integration/22141-5059/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-992301" cluster setting kubeconfig missing "test-preload-992301" context setting]
	I1216 05:20:15.295040   34771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:20:15.295583   34771 kapi.go:59] client config for test-preload-992301: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:20:15.296033   34771 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 05:20:15.296050   34771 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 05:20:15.296055   34771 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 05:20:15.296059   34771 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 05:20:15.296065   34771 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 05:20:15.296460   34771 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 05:20:15.310071   34771 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.195
	I1216 05:20:15.310109   34771 kubeadm.go:1161] stopping kube-system containers ...
	I1216 05:20:15.310119   34771 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 05:20:15.310167   34771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:20:15.354536   34771 cri.go:89] found id: ""
	I1216 05:20:15.354639   34771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 05:20:15.376458   34771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 05:20:15.389355   34771 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 05:20:15.389380   34771 kubeadm.go:158] found existing configuration files:
	
	I1216 05:20:15.389432   34771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 05:20:15.401008   34771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 05:20:15.401069   34771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 05:20:15.413831   34771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 05:20:15.425360   34771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 05:20:15.425427   34771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 05:20:15.438251   34771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 05:20:15.450142   34771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 05:20:15.450205   34771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 05:20:15.463043   34771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 05:20:15.474936   34771 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 05:20:15.474992   34771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
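The four grep/rm pairs above implement a simple cleanup rule: any leftover kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm regenerates it. Below is a minimal local sketch of that pattern; the paths and endpoint are taken from the log, but the helper itself is illustrative (minikube actually issues the equivalent grep/rm commands over SSH through its ssh_runner rather than touching local files).

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleConfigs deletes any config file that exists but does not
// reference the expected control-plane endpoint, mirroring the
// grep-then-rm sequence in the log above. Illustrative sketch only.
func removeStaleConfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue // nothing to clean up
		}
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%s does not reference %s, removing\n", p, endpoint)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := removeStaleConfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}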
	I1216 05:20:15.488135   34771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 05:20:15.500891   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:20:15.562618   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:20:17.063582   34771 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.500918707s)
	I1216 05:20:17.063661   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:20:17.338746   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:20:17.406872   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
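The five commands above re-run individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml instead of performing a full `kubeadm init`. A hedged sketch of driving those phases from Go with os/exec follows; the phase names, flags, and paths are copied from the log, but error handling is minimal and the real caller also prefixes PATH via the `env` wrapper shown above.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.34.2/kubeadm" // binary path shown in the log
	config := "/var/tmp/minikube/kubeadm.yaml"

	// Same phase order as the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
			os.Exit(1)
		}
	}
}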
	I1216 05:20:17.509260   34771 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:20:17.509365   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:18.009754   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:18.510232   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:19.010437   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:19.510065   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:19.565859   34771 api_server.go:72] duration metric: took 2.05661622s to wait for apiserver process to appear ...
	I1216 05:20:19.565883   34771 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:20:19.565899   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:22.184743   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 05:20:22.184781   34771 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 05:20:22.184795   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:22.224233   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 05:20:22.224263   34771 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 05:20:22.566853   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:22.574053   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:20:22.574090   34771 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:20:23.066805   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:23.071976   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 05:20:23.072003   34771 api_server.go:103] status: https://192.168.39.195:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 05:20:23.566724   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:23.571653   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1216 05:20:23.579998   34771 api_server.go:141] control plane version: v1.34.2
	I1216 05:20:23.580024   34771 api_server.go:131] duration metric: took 4.014135399s to wait for apiserver health ...
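The 403, 500, 200 progression above is the usual shape of an apiserver restart in these logs: the first anonymous probes of /healthz are rejected, the endpoint then answers 500 while the rbac and scheduling bootstrap post-start hooks still report failed, and it finally returns 200 once every check passes. A minimal polling sketch of the kind of loop these lines reflect is below; TLS verification is skipped purely to keep the sketch short, whereas a real client would use the cluster CA and client certificates as in the rest.Config logged earlier.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. Illustrative sketch only.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Verification skipped only for brevity in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.195:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}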
	I1216 05:20:23.580033   34771 cni.go:84] Creating CNI manager for ""
	I1216 05:20:23.580039   34771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:20:23.581870   34771 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 05:20:23.583179   34771 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 05:20:23.606694   34771 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 05:20:23.648457   34771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:20:23.654949   34771 system_pods.go:59] 7 kube-system pods found
	I1216 05:20:23.654989   34771 system_pods.go:61] "coredns-66bc5c9577-8lhj6" [03c61e8c-3113-46be-8b73-5049e4a8a8c1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 05:20:23.654997   34771 system_pods.go:61] "etcd-test-preload-992301" [35f5fbb7-8763-443f-afa4-66df365fc2b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:20:23.655009   34771 system_pods.go:61] "kube-apiserver-test-preload-992301" [d910a2ec-8449-4c42-96df-bcdeb1b205e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 05:20:23.655017   34771 system_pods.go:61] "kube-controller-manager-test-preload-992301" [e986fff2-5c86-435e-8fc0-81e39975ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:20:23.655025   34771 system_pods.go:61] "kube-proxy-5wh44" [abb7b43d-d552-4e42-a487-6e44723ce7dc] Running
	I1216 05:20:23.655034   34771 system_pods.go:61] "kube-scheduler-test-preload-992301" [2cde6319-6f46-48e3-b35d-a7bc411d2805] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:20:23.655045   34771 system_pods.go:61] "storage-provisioner" [af87e011-b988-44a3-a5f4-c3a5aa94b813] Running
	I1216 05:20:23.655053   34771 system_pods.go:74] duration metric: took 6.572209ms to wait for pod list to return data ...
	I1216 05:20:23.655063   34771 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:20:23.658888   34771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 05:20:23.658921   34771 node_conditions.go:123] node cpu capacity is 2
	I1216 05:20:23.658939   34771 node_conditions.go:105] duration metric: took 3.87124ms to run NodePressure ...
	I1216 05:20:23.659003   34771 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 05:20:23.922662   34771 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1216 05:20:23.927598   34771 kubeadm.go:744] kubelet initialised
	I1216 05:20:23.927623   34771 kubeadm.go:745] duration metric: took 4.934221ms waiting for restarted kubelet to initialise ...
	I1216 05:20:23.927656   34771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 05:20:23.944434   34771 ops.go:34] apiserver oom_adj: -16
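The oom_adj probe above resolves the kube-apiserver PID with pgrep and reads its OOM adjustment from /proc; the -16 reported here is the legacy-file view of the strongly negative oom_score_adj the kubelet assigns to critical static pods. A local sketch of the same check, under the assumption that the deprecated /proc/<pid>/oom_adj file is still exposed by the kernel (as it evidently is on this VM):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the newest matching kube-apiserver PID, as `pgrep -xnf` does in the log.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.TrimSpace(string(out))

	// Read the legacy oom_adj file the log inspects (-16 in this run).
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
}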
	I1216 05:20:23.944457   34771 kubeadm.go:602] duration metric: took 8.664229579s to restartPrimaryControlPlane
	I1216 05:20:23.944470   34771 kubeadm.go:403] duration metric: took 8.717121872s to StartCluster
	I1216 05:20:23.944493   34771 settings.go:142] acquiring lock: {Name:mk934ce4e0f52c59044080dacae6bea8d1937fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:20:23.944578   34771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:20:23.945130   34771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:20:23.945410   34771 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:20:23.945476   34771 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:20:23.945581   34771 addons.go:70] Setting storage-provisioner=true in profile "test-preload-992301"
	I1216 05:20:23.945612   34771 addons.go:239] Setting addon storage-provisioner=true in "test-preload-992301"
	I1216 05:20:23.945611   34771 addons.go:70] Setting default-storageclass=true in profile "test-preload-992301"
	W1216 05:20:23.945624   34771 addons.go:248] addon storage-provisioner should already be in state true
	I1216 05:20:23.945628   34771 config.go:182] Loaded profile config "test-preload-992301": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:20:23.945658   34771 host.go:66] Checking if "test-preload-992301" exists ...
	I1216 05:20:23.945637   34771 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-992301"
	I1216 05:20:23.947177   34771 out.go:179] * Verifying Kubernetes components...
	I1216 05:20:23.947989   34771 kapi.go:59] client config for test-preload-992301: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:20:23.948247   34771 addons.go:239] Setting addon default-storageclass=true in "test-preload-992301"
	W1216 05:20:23.948264   34771 addons.go:248] addon default-storageclass should already be in state true
	I1216 05:20:23.948292   34771 host.go:66] Checking if "test-preload-992301" exists ...
	I1216 05:20:23.948657   34771 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:20:23.948696   34771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:20:23.949863   34771 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:20:23.949886   34771 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:20:23.950046   34771 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:20:23.950063   34771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:20:23.952531   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:23.952826   34771 main.go:143] libmachine: domain test-preload-992301 has defined MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:23.952890   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:23.952920   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:23.953042   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:23.953330   34771 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:4b:38", ip: ""} in network mk-test-preload-992301: {Iface:virbr1 ExpiryTime:2025-12-16 06:20:06 +0000 UTC Type:0 Mac:52:54:00:2e:4b:38 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:test-preload-992301 Clientid:01:52:54:00:2e:4b:38}
	I1216 05:20:23.953352   34771 main.go:143] libmachine: domain test-preload-992301 has defined IP address 192.168.39.195 and MAC address 52:54:00:2e:4b:38 in network mk-test-preload-992301
	I1216 05:20:23.953548   34771 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/test-preload-992301/id_rsa Username:docker}
	I1216 05:20:24.158896   34771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:20:24.180874   34771 node_ready.go:35] waiting up to 6m0s for node "test-preload-992301" to be "Ready" ...
	I1216 05:20:24.231721   34771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:20:24.331713   34771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:20:25.072383   34771 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1216 05:20:25.073942   34771 addons.go:530] duration metric: took 1.128471543s for enable addons: enabled=[default-storageclass storage-provisioner]
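Addon enablement above amounts to a `kubectl apply` of two static manifests that were copied onto the node, executed with the node's own kubeconfig. A hedged local sketch of that step follows; the binary, kubeconfig, and manifest paths match the log, while the helper name is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest runs `kubectl apply -f <path>` with an explicit kubeconfig,
// mirroring the two addon commands in the log.
func applyManifest(kubectl, kubeconfig, path string) error {
	cmd := exec.Command(kubectl, "apply", "-f", path)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.2/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyManifest(kubectl, kubeconfig, m); err != nil {
			fmt.Fprintf(os.Stderr, "apply %s: %v\n", m, err)
			os.Exit(1)
		}
	}
}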
	W1216 05:20:26.185074   34771 node_ready.go:57] node "test-preload-992301" has "Ready":"False" status (will retry)
	W1216 05:20:28.685217   34771 node_ready.go:57] node "test-preload-992301" has "Ready":"False" status (will retry)
	W1216 05:20:30.688702   34771 node_ready.go:57] node "test-preload-992301" has "Ready":"False" status (will retry)
	I1216 05:20:32.685199   34771 node_ready.go:49] node "test-preload-992301" is "Ready"
	I1216 05:20:32.685237   34771 node_ready.go:38] duration metric: took 8.504311189s for node "test-preload-992301" to be "Ready" ...
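The node readiness wait above retries every couple of seconds until the node reports the Ready condition as True. A compact client-go sketch of that check is below, pointed at the profile kubeconfig from the log; the function names are illustrative and not minikube's actual node_ready implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22141-5059/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		ready, err := nodeReady(ctx, cs, "test-preload-992301")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second) // roughly the retry cadence in the log
	}
}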
	I1216 05:20:32.685253   34771 api_server.go:52] waiting for apiserver process to appear ...
	I1216 05:20:32.685315   34771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:20:32.706883   34771 api_server.go:72] duration metric: took 8.761440414s to wait for apiserver process to appear ...
	I1216 05:20:32.706915   34771 api_server.go:88] waiting for apiserver healthz status ...
	I1216 05:20:32.706936   34771 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I1216 05:20:32.712121   34771 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I1216 05:20:32.713234   34771 api_server.go:141] control plane version: v1.34.2
	I1216 05:20:32.713264   34771 api_server.go:131] duration metric: took 6.340597ms to wait for apiserver health ...
	I1216 05:20:32.713275   34771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 05:20:32.717164   34771 system_pods.go:59] 7 kube-system pods found
	I1216 05:20:32.717193   34771 system_pods.go:61] "coredns-66bc5c9577-8lhj6" [03c61e8c-3113-46be-8b73-5049e4a8a8c1] Running
	I1216 05:20:32.717201   34771 system_pods.go:61] "etcd-test-preload-992301" [35f5fbb7-8763-443f-afa4-66df365fc2b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:20:32.717206   34771 system_pods.go:61] "kube-apiserver-test-preload-992301" [d910a2ec-8449-4c42-96df-bcdeb1b205e5] Running
	I1216 05:20:32.717213   34771 system_pods.go:61] "kube-controller-manager-test-preload-992301" [e986fff2-5c86-435e-8fc0-81e39975ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:20:32.717219   34771 system_pods.go:61] "kube-proxy-5wh44" [abb7b43d-d552-4e42-a487-6e44723ce7dc] Running
	I1216 05:20:32.717225   34771 system_pods.go:61] "kube-scheduler-test-preload-992301" [2cde6319-6f46-48e3-b35d-a7bc411d2805] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:20:32.717229   34771 system_pods.go:61] "storage-provisioner" [af87e011-b988-44a3-a5f4-c3a5aa94b813] Running
	I1216 05:20:32.717235   34771 system_pods.go:74] duration metric: took 3.954756ms to wait for pod list to return data ...
	I1216 05:20:32.717244   34771 default_sa.go:34] waiting for default service account to be created ...
	I1216 05:20:32.720430   34771 default_sa.go:45] found service account: "default"
	I1216 05:20:32.720453   34771 default_sa.go:55] duration metric: took 3.204085ms for default service account to be created ...
	I1216 05:20:32.720462   34771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 05:20:32.723678   34771 system_pods.go:86] 7 kube-system pods found
	I1216 05:20:32.723705   34771 system_pods.go:89] "coredns-66bc5c9577-8lhj6" [03c61e8c-3113-46be-8b73-5049e4a8a8c1] Running
	I1216 05:20:32.723714   34771 system_pods.go:89] "etcd-test-preload-992301" [35f5fbb7-8763-443f-afa4-66df365fc2b0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 05:20:32.723719   34771 system_pods.go:89] "kube-apiserver-test-preload-992301" [d910a2ec-8449-4c42-96df-bcdeb1b205e5] Running
	I1216 05:20:32.723727   34771 system_pods.go:89] "kube-controller-manager-test-preload-992301" [e986fff2-5c86-435e-8fc0-81e39975ca04] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 05:20:32.723731   34771 system_pods.go:89] "kube-proxy-5wh44" [abb7b43d-d552-4e42-a487-6e44723ce7dc] Running
	I1216 05:20:32.723736   34771 system_pods.go:89] "kube-scheduler-test-preload-992301" [2cde6319-6f46-48e3-b35d-a7bc411d2805] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 05:20:32.723740   34771 system_pods.go:89] "storage-provisioner" [af87e011-b988-44a3-a5f4-c3a5aa94b813] Running
	I1216 05:20:32.723747   34771 system_pods.go:126] duration metric: took 3.279667ms to wait for k8s-apps to be running ...
	I1216 05:20:32.723753   34771 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 05:20:32.723814   34771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:20:32.742005   34771 system_svc.go:56] duration metric: took 18.239747ms WaitForService to wait for kubelet
	I1216 05:20:32.742035   34771 kubeadm.go:587] duration metric: took 8.796596687s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 05:20:32.742052   34771 node_conditions.go:102] verifying NodePressure condition ...
	I1216 05:20:32.745381   34771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 05:20:32.745409   34771 node_conditions.go:123] node cpu capacity is 2
	I1216 05:20:32.745419   34771 node_conditions.go:105] duration metric: took 3.363103ms to run NodePressure ...
	I1216 05:20:32.745430   34771 start.go:242] waiting for startup goroutines ...
	I1216 05:20:32.745437   34771 start.go:247] waiting for cluster config update ...
	I1216 05:20:32.745447   34771 start.go:256] writing updated cluster config ...
	I1216 05:20:32.745726   34771 ssh_runner.go:195] Run: rm -f paused
	I1216 05:20:32.751561   34771 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 05:20:32.751993   34771 kapi.go:59] client config for test-preload-992301: &rest.Config{Host:"https://192.168.39.195:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.crt", KeyFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/profiles/test-preload-992301/client.key", CAFile:"/home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 05:20:32.755595   34771 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8lhj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:32.761343   34771 pod_ready.go:94] pod "coredns-66bc5c9577-8lhj6" is "Ready"
	I1216 05:20:32.761377   34771 pod_ready.go:86] duration metric: took 5.758119ms for pod "coredns-66bc5c9577-8lhj6" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:32.763965   34771 pod_ready.go:83] waiting for pod "etcd-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 05:20:34.771137   34771 pod_ready.go:104] pod "etcd-test-preload-992301" is not "Ready", error: <nil>
	W1216 05:20:36.771983   34771 pod_ready.go:104] pod "etcd-test-preload-992301" is not "Ready", error: <nil>
	I1216 05:20:37.772416   34771 pod_ready.go:94] pod "etcd-test-preload-992301" is "Ready"
	I1216 05:20:37.772440   34771 pod_ready.go:86] duration metric: took 5.008445256s for pod "etcd-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.774714   34771 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.779816   34771 pod_ready.go:94] pod "kube-apiserver-test-preload-992301" is "Ready"
	I1216 05:20:37.779851   34771 pod_ready.go:86] duration metric: took 5.112712ms for pod "kube-apiserver-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.782440   34771 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.786818   34771 pod_ready.go:94] pod "kube-controller-manager-test-preload-992301" is "Ready"
	I1216 05:20:37.786842   34771 pod_ready.go:86] duration metric: took 4.380888ms for pod "kube-controller-manager-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.788946   34771 pod_ready.go:83] waiting for pod "kube-proxy-5wh44" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:37.968630   34771 pod_ready.go:94] pod "kube-proxy-5wh44" is "Ready"
	I1216 05:20:37.968655   34771 pod_ready.go:86] duration metric: took 179.692173ms for pod "kube-proxy-5wh44" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:38.167866   34771 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:38.568527   34771 pod_ready.go:94] pod "kube-scheduler-test-preload-992301" is "Ready"
	I1216 05:20:38.568560   34771 pod_ready.go:86] duration metric: took 400.66795ms for pod "kube-scheduler-test-preload-992301" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 05:20:38.568576   34771 pod_ready.go:40] duration metric: took 5.816981318s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
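The per-component waits above cycle through one label selector per control-plane component and block until each matching pod reports Ready or disappears. A hedged client-go sketch of a single pass over those checks follows; the selectors are copied from the log, the helper name is illustrative, and only the "Ready" half of the condition is covered.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsReady reports whether every kube-system pod matching the selector
// has condition Ready=True.
func podsReady(ctx context.Context, cs *kubernetes.Clientset, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22141-5059/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same label selectors the log waits on.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, s := range selectors {
		ok, err := podsReady(context.Background(), cs, s)
		fmt.Printf("%s ready=%v err=%v\n", s, ok, err)
	}
}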
	I1216 05:20:38.612021   34771 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 05:20:38.613886   34771 out.go:179] * Done! kubectl is now configured to use "test-preload-992301" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.449012071Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6552ecc2-45d6-4af8-b4be-47e08ea0ddd8 name=/runtime.v1.RuntimeService/Version
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.450849012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ffc4847-7d27-463e-ade1-7339b67e058b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.451307811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765862439451282224,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ffc4847-7d27-463e-ade1-7339b67e058b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.452414884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62d51e4d-76f6-4465-a491-e31bbfeb9ca2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.452546465Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62d51e4d-76f6-4465-a491-e31bbfeb9ca2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.452873081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3ddb5a9440039e67b6e3a62c3e2e9bf31b099e5b5002b49dcbb4130ad8c8a74,PodSandboxId:be750559bdf015816db9d31f1fe931fc2d68d043afef2961f5ebd5049bf50f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765862430528481334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8lhj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61e8c-3113-46be-8b73-5049e4a8a8c1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e984f84317d5f9b2621f4af35a16e43c58d242f22d30cd885db1d3505a648a98,PodSandboxId:34d3a337d39a79711d0c3a083f05d4e74ff6ba674ef09f233801258c9eefab61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765862422855017490,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5wh44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb7b43d-d552-4e42-a487-6e44723ce7dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f582e881673204a4464fb7bc1bca39e1d7e7b8f90dd78716cb96be40affbe79b,PodSandboxId:a33d99c44bf6d91f2ed314f34563a5e6433267a9e90cc5392c3fa2f20fcf5b1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765862422856901817,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87e011-b988-44a3-a5f4-c3a5aa94b813,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad2d30b29214e43c8d3255cff7194c2626e6770d1d908a74c41a4c7ba834a3a,PodSandboxId:45a8614e614c0cc12f72a580169f5aa63912779cab0503543d095c4bd4c515bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765862419168311364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee4e3e8a2029e8b86246d88b102a343,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb37c0b370bd1c18c82cb0bfa7fbeac99707eb146f06606d15b3a31e6b0d4a8,PodSandboxId:4baf3851bc68088e37cad642f5e88691e09206f6d08c1cbdeddc68dc37ddb332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765862419137870278,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec55ea4d202568fd92340ee2a47b0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf05d935c37fd520f91e3abd5ff3cf9cf952ae7d87325d263b4e5ac3c4c759,PodSandboxId:d6e364bf6fba0a518ba52912cf81eb57a50c13cd3d780ec97766930fa2e3ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765862419143920903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fe64ed6ca561050ea2272c94b80259,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e01182816a802f75f06266e05e77d5a9ee6ac8c0c2c6845bc77e916209d542,PodSandboxId:64912a6a3945d5ee3c642826c284a7d3d28a5dee034bf072781e38cd7c82fea8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765862419106367080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d513485dbe66f34245810b94dfe7542,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62d51e4d-76f6-4465-a491-e31bbfeb9ca2 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.461766437Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c974fdcd-2f17-48ca-9ad2-f73ff1d6a4a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.462547286Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:be750559bdf015816db9d31f1fe931fc2d68d043afef2961f5ebd5049bf50f0b,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-8lhj6,Uid:03c61e8c-3113-46be-8b73-5049e4a8a8c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862430271990147,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-8lhj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61e8c-3113-46be-8b73-5049e4a8a8c1,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T05:20:22.409756082Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a33d99c44bf6d91f2ed314f34563a5e6433267a9e90cc5392c3fa2f20fcf5b1f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:af87e011-b988-44a3-a5f4-c3a5aa94b813,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862422729765819,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87e011-b988-44a3-a5f4-c3a5aa94b813,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-16T05:20:22.409754619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34d3a337d39a79711d0c3a083f05d4e74ff6ba674ef09f233801258c9eefab61,Metadata:&PodSandboxMetadata{Name:kube-proxy-5wh44,Uid:abb7b43d-d552-4e42-a487-6e44723ce7dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862422725889626,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5wh44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb7b43d-d552-4e42-a487-6e44723ce7dc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T05:20:22.409752589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45a8614e614c0cc12f72a580169f5aa63912779cab0503543d095c4bd4c515bc,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-992301,Uid:aee4e3e8a2029e8b8
6246d88b102a343,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862418856394611,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee4e3e8a2029e8b86246d88b102a343,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.195:2379,kubernetes.io/config.hash: aee4e3e8a2029e8b86246d88b102a343,kubernetes.io/config.seen: 2025-12-16T05:20:17.488841163Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64912a6a3945d5ee3c642826c284a7d3d28a5dee034bf072781e38cd7c82fea8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-992301,Uid:5d513485dbe66f34245810b94dfe7542,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862418849184902,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube
-controller-manager-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d513485dbe66f34245810b94dfe7542,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5d513485dbe66f34245810b94dfe7542,kubernetes.io/config.seen: 2025-12-16T05:20:17.411175232Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6e364bf6fba0a518ba52912cf81eb57a50c13cd3d780ec97766930fa2e3ba18,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-992301,Uid:47fe64ed6ca561050ea2272c94b80259,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862418833524194,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fe64ed6ca561050ea2272c94b80259,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.195:8443,kubernetes.io/c
onfig.hash: 47fe64ed6ca561050ea2272c94b80259,kubernetes.io/config.seen: 2025-12-16T05:20:17.411173957Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4baf3851bc68088e37cad642f5e88691e09206f6d08c1cbdeddc68dc37ddb332,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-992301,Uid:2ec55ea4d202568fd92340ee2a47b0ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765862418832600803,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec55ea4d202568fd92340ee2a47b0ba,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ec55ea4d202568fd92340ee2a47b0ba,kubernetes.io/config.seen: 2025-12-16T05:20:17.411154882Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c974fdcd-2f17-48ca-9ad2-f73ff1d6a4a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.463503874Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a915b463-be3a-4be8-a03a-32ac60fa5491 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.463582886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a915b463-be3a-4be8-a03a-32ac60fa5491 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.463794661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3ddb5a9440039e67b6e3a62c3e2e9bf31b099e5b5002b49dcbb4130ad8c8a74,PodSandboxId:be750559bdf015816db9d31f1fe931fc2d68d043afef2961f5ebd5049bf50f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765862430528481334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8lhj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61e8c-3113-46be-8b73-5049e4a8a8c1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e984f84317d5f9b2621f4af35a16e43c58d242f22d30cd885db1d3505a648a98,PodSandboxId:34d3a337d39a79711d0c3a083f05d4e74ff6ba674ef09f233801258c9eefab61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765862422855017490,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5wh44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb7b43d-d552-4e42-a487-6e44723ce7dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f582e881673204a4464fb7bc1bca39e1d7e7b8f90dd78716cb96be40affbe79b,PodSandboxId:a33d99c44bf6d91f2ed314f34563a5e6433267a9e90cc5392c3fa2f20fcf5b1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765862422856901817,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87e011-b988-44a3-a5f4-c3a5aa94b813,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad2d30b29214e43c8d3255cff7194c2626e6770d1d908a74c41a4c7ba834a3a,PodSandboxId:45a8614e614c0cc12f72a580169f5aa63912779cab0503543d095c4bd4c515bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765862419168311364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee4e3e8a2029e8b86246d88b102a343,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb37c0b370bd1c18c82cb0bfa7fbeac99707eb146f06606d15b3a31e6b0d4a8,PodSandboxId:4baf3851bc68088e37cad642f5e88691e09206f6d08c1cbdeddc68dc37ddb332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765862419137870278,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec55ea4d202568fd92340ee2a47b0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf05d935c37fd520f91e3abd5ff3cf9cf952ae7d87325d263b4e5ac3c4c759,PodSandboxId:d6e364bf6fba0a518ba52912cf81eb57a50c13cd3d780ec97766930fa2e3ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765862419143920903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fe64ed6ca561050ea2272c94b80259,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e01182816a802f75f06266e05e77d5a9ee6ac8c0c2c6845bc77e916209d542,PodSandboxId:64912a6a3945d5ee3c642826c284a7d3d28a5dee034bf072781e38cd7c82fea8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765862419106367080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d513485dbe66f34245810b94dfe7542,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a915b463-be3a-4be8-a03a-32ac60fa5491 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.489508963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83dd1931-54ed-44f5-8dac-fd5371964925 name=/runtime.v1.RuntimeService/Version
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.489596996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83dd1931-54ed-44f5-8dac-fd5371964925 name=/runtime.v1.RuntimeService/Version
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.491211390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a66f98ff-e89c-4052-bf11-d1564faac4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.491601851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765862439491576417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a66f98ff-e89c-4052-bf11-d1564faac4c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.492621040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd9dcdeb-3309-40a5-8e31-b92ded4f92a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.492734506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd9dcdeb-3309-40a5-8e31-b92ded4f92a5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.493080688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3ddb5a9440039e67b6e3a62c3e2e9bf31b099e5b5002b49dcbb4130ad8c8a74,PodSandboxId:be750559bdf015816db9d31f1fe931fc2d68d043afef2961f5ebd5049bf50f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765862430528481334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8lhj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61e8c-3113-46be-8b73-5049e4a8a8c1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e984f84317d5f9b2621f4af35a16e43c58d242f22d30cd885db1d3505a648a98,PodSandboxId:34d3a337d39a79711d0c3a083f05d4e74ff6ba674ef09f233801258c9eefab61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765862422855017490,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5wh44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb7b43d-d552-4e42-a487-6e44723ce7dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f582e881673204a4464fb7bc1bca39e1d7e7b8f90dd78716cb96be40affbe79b,PodSandboxId:a33d99c44bf6d91f2ed314f34563a5e6433267a9e90cc5392c3fa2f20fcf5b1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765862422856901817,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87e011-b988-44a3-a5f4-c3a5aa94b813,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad2d30b29214e43c8d3255cff7194c2626e6770d1d908a74c41a4c7ba834a3a,PodSandboxId:45a8614e614c0cc12f72a580169f5aa63912779cab0503543d095c4bd4c515bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765862419168311364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee4e3e8a2029e8b86246d88b102a343,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb37c0b370bd1c18c82cb0bfa7fbeac99707eb146f06606d15b3a31e6b0d4a8,PodSandboxId:4baf3851bc68088e37cad642f5e88691e09206f6d08c1cbdeddc68dc37ddb332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765862419137870278,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec55ea4d202568fd92340ee2a47b0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf05d935c37fd520f91e3abd5ff3cf9cf952ae7d87325d263b4e5ac3c4c759,PodSandboxId:d6e364bf6fba0a518ba52912cf81eb57a50c13cd3d780ec97766930fa2e3ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765862419143920903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fe64ed6ca561050ea2272c94b80259,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e01182816a802f75f06266e05e77d5a9ee6ac8c0c2c6845bc77e916209d542,PodSandboxId:64912a6a3945d5ee3c642826c284a7d3d28a5dee034bf072781e38cd7c82fea8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765862419106367080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d513485dbe66f34245810b94dfe7542,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd9dcdeb-3309-40a5-8e31-b92ded4f92a5 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.526474098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22b833ca-c14c-49d2-9b5c-2e2aa7ca91e6 name=/runtime.v1.RuntimeService/Version
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.526577927Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22b833ca-c14c-49d2-9b5c-2e2aa7ca91e6 name=/runtime.v1.RuntimeService/Version
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.529237769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be01a802-fcdf-4edf-9c31-d8f75fd908f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.529770316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765862439529642107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be01a802-fcdf-4edf-9c31-d8f75fd908f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.531059527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a34a4037-2d26-4bd1-ac1c-9217cc64c1df name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.531413192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a34a4037-2d26-4bd1-ac1c-9217cc64c1df name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 05:20:39 test-preload-992301 crio[836]: time="2025-12-16 05:20:39.531613588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3ddb5a9440039e67b6e3a62c3e2e9bf31b099e5b5002b49dcbb4130ad8c8a74,PodSandboxId:be750559bdf015816db9d31f1fe931fc2d68d043afef2961f5ebd5049bf50f0b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765862430528481334,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-8lhj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c61e8c-3113-46be-8b73-5049e4a8a8c1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e984f84317d5f9b2621f4af35a16e43c58d242f22d30cd885db1d3505a648a98,PodSandboxId:34d3a337d39a79711d0c3a083f05d4e74ff6ba674ef09f233801258c9eefab61,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765862422855017490,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5wh44,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abb7b43d-d552-4e42-a487-6e44723ce7dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f582e881673204a4464fb7bc1bca39e1d7e7b8f90dd78716cb96be40affbe79b,PodSandboxId:a33d99c44bf6d91f2ed314f34563a5e6433267a9e90cc5392c3fa2f20fcf5b1f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765862422856901817,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87e011-b988-44a3-a5f4-c3a5aa94b813,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad2d30b29214e43c8d3255cff7194c2626e6770d1d908a74c41a4c7ba834a3a,PodSandboxId:45a8614e614c0cc12f72a580169f5aa63912779cab0503543d095c4bd4c515bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765862419168311364,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aee4e3e8a2029e8b86246d88b102a343,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bb37c0b370bd1c18c82cb0bfa7fbeac99707eb146f06606d15b3a31e6b0d4a8,PodSandboxId:4baf3851bc68088e37cad642f5e88691e09206f6d08c1cbdeddc68dc37ddb332,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,Crea
tedAt:1765862419137870278,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ec55ea4d202568fd92340ee2a47b0ba,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcaf05d935c37fd520f91e3abd5ff3cf9cf952ae7d87325d263b4e5ac3c4c759,PodSandboxId:d6e364bf6fba0a518ba52912cf81eb57a50c13cd3d780ec97766930fa2e3ba18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765862419143920903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fe64ed6ca561050ea2272c94b80259,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4e01182816a802f75f06266e05e77d5a9ee6ac8c0c2c6845bc77e916209d542,PodSandboxId:64912a6a3945d5ee3c642826c284a7d3d28a5dee034bf072781e38cd7c82fea8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765862419106367080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-992301,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d513485dbe66f34245810b94dfe7542,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a34a4037-2d26-4bd1-ac1c-9217cc64c1df name=/runtime.v1.RuntimeServic
e/ListContainers
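
For reference, the cri-o debug entries above are ordinary v1 CRI gRPC round-trips: Version, ImageFsInfo, and an unfiltered ListContainers that returns the seven running control-plane and system containers. A minimal Go client issuing the same three calls, assuming the default cri-o socket path and the k8s.io/cri-api v1 bindings (both assumptions of this sketch, not taken from the test), looks roughly like this:

// Sketch only: mirrors the Version / ImageFsInfo / ListContainers requests
// visible in the cri-o debug log above. Socket path and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial cri-o: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// &VersionRequest{} -> RuntimeName:cri-o, RuntimeVersion:1.29.1 in the log above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// &ImageFsInfoRequest{} -> usage of /var/lib/containers/storage/overlay-images.
	fsinfo, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("imagefsinfo: %v", err)
	}
	for _, fs := range fsinfo.ImageFilesystems {
		fmt.Println(fs.FsId.Mountpoint, fs.UsedBytes.Value)
	}

	// An empty filter returns the full container list, as the debug log notes.
	containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("listcontainers: %v", err)
	}
	for _, c := range containers.Containers {
		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	}
}

The 13-character IDs printed by the last loop correspond to the truncated CONTAINER column in the status table that follows.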
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	f3ddb5a944003       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   1                   be750559bdf01       coredns-66bc5c9577-8lhj6                      kube-system
	f582e88167320       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   a33d99c44bf6d       storage-provisioner                           kube-system
	e984f84317d5f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   34d3a337d39a7       kube-proxy-5wh44                              kube-system
	cad2d30b29214       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   45a8614e614c0       etcd-test-preload-992301                      kube-system
	fcaf05d935c37       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   d6e364bf6fba0       kube-apiserver-test-preload-992301            kube-system
	9bb37c0b370bd       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   4baf3851bc680       kube-scheduler-test-preload-992301            kube-system
	d4e01182816a8       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   64912a6a3945d       kube-controller-manager-test-preload-992301   kube-system
	
	
	==> coredns [f3ddb5a9440039e67b6e3a62c3e2e9bf31b099e5b5002b49dcbb4130ad8c8a74] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33104 - 24639 "HINFO IN 5510981228480725638.2681774116947258538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046590345s
	
	
	==> describe nodes <==
	Name:               test-preload-992301
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-992301
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=54c60a371d0e7275c67570df7629148966e8126f
	                    minikube.k8s.io/name=test-preload-992301
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T05_19_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 05:18:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-992301
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 05:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 05:20:32 +0000   Tue, 16 Dec 2025 05:18:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 05:20:32 +0000   Tue, 16 Dec 2025 05:18:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 05:20:32 +0000   Tue, 16 Dec 2025 05:18:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 05:20:32 +0000   Tue, 16 Dec 2025 05:20:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    test-preload-992301
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 c40e4ac1f1ce40899e5198c94c5db49e
	  System UUID:                c40e4ac1-f1ce-4089-9e51-98c94c5db49e
	  Boot ID:                    90cb0b0a-71fa-4cd0-a4b5-ea21c841a350
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8lhj6                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     93s
	  kube-system                 etcd-test-preload-992301                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-992301             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-992301    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-5wh44                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-test-preload-992301             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 91s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  98s                kubelet          Node test-preload-992301 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    98s                kubelet          Node test-preload-992301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s                kubelet          Node test-preload-992301 status is now: NodeHasSufficientPID
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Normal   NodeReady                97s                kubelet          Node test-preload-992301 status is now: NodeReady
	  Normal   RegisteredNode           94s                node-controller  Node test-preload-992301 event: Registered Node test-preload-992301 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-992301 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-992301 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-992301 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-992301 has been rebooted, boot id: 90cb0b0a-71fa-4cd0-a4b5-ea21c841a350
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-992301 event: Registered Node test-preload-992301 in Controller
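
For reference, the percentages in the "Allocated resources" table above are the summed pod requests and limits divided by the node's allocatable capacity (a worked check, not part of the captured output):

  cpu requests:    750m  / 2000m allocatable (2 CPUs)          = 37.5%  -> shown as 37%
  memory requests: 170Mi / 3035912Ki allocatable (~2965Mi)     ≈ 5.7%   -> shown as 5%
  memory limits:   170Mi / ~2965Mi                             ≈ 5.7%   -> shown as 5%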
	
	
	==> dmesg <==
	[Dec16 05:19] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000000] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec16 05:20] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007045] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.893971] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.109627] kauditd_printk_skb: 88 callbacks suppressed
	[  +5.608831] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000027] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.041141] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [cad2d30b29214e43c8d3255cff7194c2626e6770d1d908a74c41a4c7ba834a3a] <==
	{"level":"warn","ts":"2025-12-16T05:20:20.963426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:20.983784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:20.997733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.012372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.022745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.032969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.048222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.059812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.074423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.090606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.107320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.123234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.158486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.167567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.180884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.198155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.216540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.235333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.262392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.270779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.325905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.335953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.353164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.373352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T05:20:21.436719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46006","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 05:20:39 up 0 min,  0 users,  load average: 0.71, 0.20, 0.07
	Linux test-preload-992301 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fcaf05d935c37fd520f91e3abd5ff3cf9cf952ae7d87325d263b4e5ac3c4c759] <==
	I1216 05:20:22.289899       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 05:20:22.289924       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 05:20:22.290025       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 05:20:22.291758       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1216 05:20:22.292768       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 05:20:22.294740       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 05:20:22.294782       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 05:20:22.294819       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 05:20:22.298876       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 05:20:22.301829       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1216 05:20:22.303301       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 05:20:22.309504       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 05:20:22.323362       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 05:20:22.323500       1 policy_source.go:240] refreshing policies
	I1216 05:20:22.329102       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 05:20:22.336927       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 05:20:22.539467       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 05:20:23.100158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 05:20:23.743956       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 05:20:23.784862       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 05:20:23.821539       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 05:20:23.829364       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 05:20:25.641911       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1216 05:20:25.973386       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 05:20:26.023183       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [d4e01182816a802f75f06266e05e77d5a9ee6ac8c0c2c6845bc77e916209d542] <==
	I1216 05:20:25.636439       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 05:20:25.645099       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 05:20:25.653646       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:20:25.660892       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 05:20:25.661274       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 05:20:25.661455       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 05:20:25.661563       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 05:20:25.661587       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 05:20:25.664373       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 05:20:25.664419       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 05:20:25.664426       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 05:20:25.669743       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1216 05:20:25.669822       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 05:20:25.669837       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 05:20:25.669847       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 05:20:25.669856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 05:20:25.669874       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 05:20:25.674430       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 05:20:25.675980       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 05:20:25.681828       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 05:20:25.683063       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 05:20:25.683313       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 05:20:25.685758       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 05:20:25.693630       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 05:20:35.621950       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e984f84317d5f9b2621f4af35a16e43c58d242f22d30cd885db1d3505a648a98] <==
	I1216 05:20:23.064916       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 05:20:23.165818       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 05:20:23.165859       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.195"]
	E1216 05:20:23.165951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 05:20:23.243906       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 05:20:23.244026       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 05:20:23.244071       1 server_linux.go:132] "Using iptables Proxier"
	I1216 05:20:23.259408       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 05:20:23.259818       1 server.go:527] "Version info" version="v1.34.2"
	I1216 05:20:23.259869       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:20:23.270340       1 config.go:200] "Starting service config controller"
	I1216 05:20:23.270412       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 05:20:23.270443       1 config.go:106] "Starting endpoint slice config controller"
	I1216 05:20:23.270458       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 05:20:23.270479       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 05:20:23.270492       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 05:20:23.271438       1 config.go:309] "Starting node config controller"
	I1216 05:20:23.271492       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 05:20:23.271509       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 05:20:23.371129       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 05:20:23.371178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 05:20:23.371212       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9bb37c0b370bd1c18c82cb0bfa7fbeac99707eb146f06606d15b3a31e6b0d4a8] <==
	I1216 05:20:20.394026       1 serving.go:386] Generated self-signed cert in-memory
	W1216 05:20:22.208322       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 05:20:22.208361       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 05:20:22.208374       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 05:20:22.208402       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 05:20:22.252594       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 05:20:22.252642       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 05:20:22.257103       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:20:22.257195       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 05:20:22.257210       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 05:20:22.257305       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 05:20:22.357846       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.385403    1190 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-992301"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.394606    1190 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-992301\" already exists" pod="kube-system/kube-apiserver-test-preload-992301"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.405942    1190 apiserver.go:52] "Watching apiserver"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.412879    1190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8lhj6" podUID="03c61e8c-3113-46be-8b73-5049e4a8a8c1"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.436888    1190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.520428    1190 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.523810    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abb7b43d-d552-4e42-a487-6e44723ce7dc-xtables-lock\") pod \"kube-proxy-5wh44\" (UID: \"abb7b43d-d552-4e42-a487-6e44723ce7dc\") " pod="kube-system/kube-proxy-5wh44"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.523867    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abb7b43d-d552-4e42-a487-6e44723ce7dc-lib-modules\") pod \"kube-proxy-5wh44\" (UID: \"abb7b43d-d552-4e42-a487-6e44723ce7dc\") " pod="kube-system/kube-proxy-5wh44"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.523915    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/af87e011-b988-44a3-a5f4-c3a5aa94b813-tmp\") pod \"storage-provisioner\" (UID: \"af87e011-b988-44a3-a5f4-c3a5aa94b813\") " pod="kube-system/storage-provisioner"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.524238    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.524399    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume podName:03c61e8c-3113-46be-8b73-5049e4a8a8c1 nodeName:}" failed. No retries permitted until 2025-12-16 05:20:23.024353194 +0000 UTC m=+5.722355860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume") pod "coredns-66bc5c9577-8lhj6" (UID: "03c61e8c-3113-46be-8b73-5049e4a8a8c1") : object "kube-system"/"coredns" not registered
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: I1216 05:20:22.553992    1190 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-992301"
	Dec 16 05:20:22 test-preload-992301 kubelet[1190]: E1216 05:20:22.576980    1190 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-992301\" already exists" pod="kube-system/etcd-test-preload-992301"
	Dec 16 05:20:23 test-preload-992301 kubelet[1190]: E1216 05:20:23.029508    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 05:20:23 test-preload-992301 kubelet[1190]: E1216 05:20:23.029596    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume podName:03c61e8c-3113-46be-8b73-5049e4a8a8c1 nodeName:}" failed. No retries permitted until 2025-12-16 05:20:24.029582966 +0000 UTC m=+6.727585620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume") pod "coredns-66bc5c9577-8lhj6" (UID: "03c61e8c-3113-46be-8b73-5049e4a8a8c1") : object "kube-system"/"coredns" not registered
	Dec 16 05:20:24 test-preload-992301 kubelet[1190]: E1216 05:20:24.040624    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 05:20:24 test-preload-992301 kubelet[1190]: E1216 05:20:24.040808    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume podName:03c61e8c-3113-46be-8b73-5049e4a8a8c1 nodeName:}" failed. No retries permitted until 2025-12-16 05:20:26.040791921 +0000 UTC m=+8.738794588 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume") pod "coredns-66bc5c9577-8lhj6" (UID: "03c61e8c-3113-46be-8b73-5049e4a8a8c1") : object "kube-system"/"coredns" not registered
	Dec 16 05:20:24 test-preload-992301 kubelet[1190]: E1216 05:20:24.460994    1190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8lhj6" podUID="03c61e8c-3113-46be-8b73-5049e4a8a8c1"
	Dec 16 05:20:26 test-preload-992301 kubelet[1190]: E1216 05:20:26.059331    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 05:20:26 test-preload-992301 kubelet[1190]: E1216 05:20:26.059424    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume podName:03c61e8c-3113-46be-8b73-5049e4a8a8c1 nodeName:}" failed. No retries permitted until 2025-12-16 05:20:30.059409563 +0000 UTC m=+12.757412217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/03c61e8c-3113-46be-8b73-5049e4a8a8c1-config-volume") pod "coredns-66bc5c9577-8lhj6" (UID: "03c61e8c-3113-46be-8b73-5049e4a8a8c1") : object "kube-system"/"coredns" not registered
	Dec 16 05:20:26 test-preload-992301 kubelet[1190]: E1216 05:20:26.461065    1190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-8lhj6" podUID="03c61e8c-3113-46be-8b73-5049e4a8a8c1"
	Dec 16 05:20:27 test-preload-992301 kubelet[1190]: E1216 05:20:27.511385    1190 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765862427510123385 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 05:20:27 test-preload-992301 kubelet[1190]: E1216 05:20:27.511492    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765862427510123385 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 05:20:37 test-preload-992301 kubelet[1190]: E1216 05:20:37.513980    1190 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765862437513175118 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 05:20:37 test-preload-992301 kubelet[1190]: E1216 05:20:37.514020    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765862437513175118 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [f582e881673204a4464fb7bc1bca39e1d7e7b8f90dd78716cb96be40affbe79b] <==
	I1216 05:20:22.973052       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-992301 -n test-preload-992301
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-992301 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-992301" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-992301
--- FAIL: TestPreload (146.54s)
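The kubelet entries above repeatedly report "Container runtime network not ready ... No CNI configuration file in /etc/cni/net.d/" and MountVolume failures because the "kube-system"/"coredns" ConfigMap is not yet registered while the preloaded node restarts. A minimal manual spot-check of both conditions, assuming the test-preload-992301 profile were still present (the cleanup step above deletes it), could look like:

	# hypothetical diagnostics, not part of the harness run; profile name taken from the log
	out/minikube-linux-amd64 -p test-preload-992301 ssh "ls /etc/cni/net.d/"
	kubectl --context test-preload-992301 -n kube-system get configmap coredns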

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (33.54s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928970 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-928970 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (29.403804751s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-928970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-928970" primary control-plane node in "pause-928970" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-928970" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:29:40.514053   43252 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:29:40.514301   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514310   43252 out.go:374] Setting ErrFile to fd 2...
	I1216 05:29:40.514314   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514575   43252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:29:40.515061   43252 out.go:368] Setting JSON to false
	I1216 05:29:40.515962   43252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4322,"bootTime":1765858658,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:29:40.516014   43252 start.go:143] virtualization: kvm guest
	I1216 05:29:40.518310   43252 out.go:179] * [pause-928970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:29:40.519733   43252 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:29:40.519789   43252 notify.go:221] Checking for updates...
	I1216 05:29:40.522275   43252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:29:40.523485   43252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:29:40.524676   43252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 05:29:40.526078   43252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:29:40.527287   43252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:29:40.528871   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:40.529388   43252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:29:40.569719   43252 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 05:29:40.570943   43252 start.go:309] selected driver: kvm2
	I1216 05:29:40.570960   43252 start.go:927] validating driver "kvm2" against &{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.571100   43252 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:29:40.571987   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:40.572069   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:40.572137   43252 start.go:353] cluster config:
	{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.572266   43252 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:29:40.574293   43252 out.go:179] * Starting "pause-928970" primary control-plane node in "pause-928970" cluster
	I1216 05:29:40.575250   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:40.575285   43252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:29:40.575298   43252 cache.go:65] Caching tarball of preloaded images
	I1216 05:29:40.575404   43252 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:29:40.575421   43252 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:29:40.575553   43252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/config.json ...
	I1216 05:29:40.575803   43252 start.go:360] acquireMachinesLock for pause-928970: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 05:29:40.575859   43252 start.go:364] duration metric: took 33.309µs to acquireMachinesLock for "pause-928970"
	I1216 05:29:40.575879   43252 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:29:40.575889   43252 fix.go:54] fixHost starting: 
	I1216 05:29:40.577685   43252 fix.go:112] recreateIfNeeded on pause-928970: state=Running err=<nil>
	W1216 05:29:40.577714   43252 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 05:29:40.579289   43252 out.go:252] * Updating the running kvm2 "pause-928970" VM ...
	I1216 05:29:40.579312   43252 machine.go:94] provisionDockerMachine start ...
	I1216 05:29:40.581965   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582465   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.582500   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582721   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.582924   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.582933   43252 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:29:40.707367   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.707396   43252 buildroot.go:166] provisioning hostname "pause-928970"
	I1216 05:29:40.711290   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.711905   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.711930   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.712101   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.712323   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.712336   43252 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-928970 && echo "pause-928970" | sudo tee /etc/hostname
	I1216 05:29:40.865287   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.869501   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870023   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.870062   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870287   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.870603   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.870640   43252 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-928970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-928970/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-928970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:29:41.005458   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:29:41.005500   43252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
	I1216 05:29:41.005567   43252 buildroot.go:174] setting up certificates
	I1216 05:29:41.005579   43252 provision.go:84] configureAuth start
	I1216 05:29:41.009160   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.009589   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.009633   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012561   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012963   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.012984   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.013148   43252 provision.go:143] copyHostCerts
	I1216 05:29:41.013205   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem, removing ...
	I1216 05:29:41.013220   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem
	I1216 05:29:41.013290   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
	I1216 05:29:41.013437   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem, removing ...
	I1216 05:29:41.013450   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem
	I1216 05:29:41.013488   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
	I1216 05:29:41.013570   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem, removing ...
	I1216 05:29:41.013581   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem
	I1216 05:29:41.013611   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
	I1216 05:29:41.013679   43252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.pause-928970 san=[127.0.0.1 192.168.61.105 localhost minikube pause-928970]
	I1216 05:29:41.165551   43252 provision.go:177] copyRemoteCerts
	I1216 05:29:41.165617   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:29:41.168295   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168654   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.168678   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168841   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:41.264251   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:29:41.302576   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 05:29:41.337850   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:29:41.382261   43252 provision.go:87] duration metric: took 376.63928ms to configureAuth
	I1216 05:29:41.382297   43252 buildroot.go:189] setting minikube options for container-runtime
	I1216 05:29:41.382548   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:41.386149   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386674   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.386701   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386990   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:41.387297   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:41.387321   43252 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 05:29:47.026203   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:29:47.026235   43252 machine.go:97] duration metric: took 6.446915161s to provisionDockerMachine
	I1216 05:29:47.026248   43252 start.go:293] postStartSetup for "pause-928970" (driver="kvm2")
	I1216 05:29:47.026258   43252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:29:47.026336   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:29:47.029231   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029660   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.029684   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029845   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.122482   43252 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:29:47.128221   43252 info.go:137] Remote host: Buildroot 2025.02
	I1216 05:29:47.128258   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
	I1216 05:29:47.128335   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
	I1216 05:29:47.128465   43252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem -> 89872.pem in /etc/ssl/certs
	I1216 05:29:47.128587   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:29:47.141501   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:47.172632   43252 start.go:296] duration metric: took 146.369932ms for postStartSetup
	I1216 05:29:47.172681   43252 fix.go:56] duration metric: took 6.596792796s for fixHost
	I1216 05:29:47.175870   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176322   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.176359   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176612   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:47.176893   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:47.176906   43252 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 05:29:47.299899   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765862987.294942981
	
	I1216 05:29:47.299926   43252 fix.go:216] guest clock: 1765862987.294942981
	I1216 05:29:47.299934   43252 fix.go:229] Guest: 2025-12-16 05:29:47.294942981 +0000 UTC Remote: 2025-12-16 05:29:47.172688027 +0000 UTC m=+6.714876738 (delta=122.254954ms)
	I1216 05:29:47.299950   43252 fix.go:200] guest clock delta is within tolerance: 122.254954ms
	I1216 05:29:47.299979   43252 start.go:83] releasing machines lock for "pause-928970", held for 6.724085047s
	I1216 05:29:47.303217   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.303706   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.303733   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.304254   43252 ssh_runner.go:195] Run: cat /version.json
	I1216 05:29:47.304373   43252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:29:47.307385   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307620   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307874   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.307903   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308076   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.308080   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.308117   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308296   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.396548   43252 ssh_runner.go:195] Run: systemctl --version
	I1216 05:29:47.423102   43252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:29:47.578195   43252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:29:47.588351   43252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:29:47.588429   43252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:29:47.600145   43252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:29:47.600180   43252 start.go:496] detecting cgroup driver to use...
	I1216 05:29:47.600256   43252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:29:47.622406   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:29:47.640566   43252 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:29:47.640626   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:29:47.662194   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:29:47.681343   43252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:29:47.868246   43252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:29:48.047906   43252 docker.go:234] disabling docker service ...
	I1216 05:29:48.047988   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:29:48.083622   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:29:48.105420   43252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:29:48.302023   43252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:29:48.510783   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:29:48.534546   43252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:29:48.559652   43252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:29:48.559766   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.574253   43252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:29:48.574328   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.591082   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.606886   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.623372   43252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:29:48.638697   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.655862   43252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.679259   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.758061   43252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:29:48.780499   43252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:29:48.815471   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:49.159917   43252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:29:49.559414   43252 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:29:49.559525   43252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:29:49.566263   43252 start.go:564] Will wait 60s for crictl version
	I1216 05:29:49.566357   43252 ssh_runner.go:195] Run: which crictl
	I1216 05:29:49.571120   43252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 05:29:49.604818   43252 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 05:29:49.604898   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.635906   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.669084   43252 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 05:29:49.673189   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673618   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:49.673642   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673856   43252 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 05:29:49.679360   43252 kubeadm.go:884] updating cluster {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:29:49.679560   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:49.679606   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.735910   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.735934   43252 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:29:49.735994   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.784434   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.784476   43252 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:29:49.784485   43252 kubeadm.go:935] updating node { 192.168.61.105 8443 v1.34.2 crio true true} ...
	I1216 05:29:49.784602   43252 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-928970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:29:49.784690   43252 ssh_runner.go:195] Run: crio config
	I1216 05:29:49.863510   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:49.863532   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:49.863546   43252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:29:49.863564   43252 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-928970 NodeName:pause-928970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:29:49.863722   43252 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-928970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:29:49.863806   43252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:29:49.881746   43252 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:29:49.881890   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:29:49.903924   43252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 05:29:49.932868   43252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:29:49.977157   43252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1216 05:29:50.016467   43252 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1216 05:29:50.022725   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:50.302708   43252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:29:50.334523   43252 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970 for IP: 192.168.61.105
	I1216 05:29:50.334546   43252 certs.go:195] generating shared ca certs ...
	I1216 05:29:50.334586   43252 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:50.334800   43252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
	I1216 05:29:50.334867   43252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
	I1216 05:29:50.334883   43252 certs.go:257] generating profile certs ...
	I1216 05:29:50.334981   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/client.key
	I1216 05:29:50.335074   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key.d0987635
	I1216 05:29:50.335138   43252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key
	I1216 05:29:50.335292   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem (1338 bytes)
	W1216 05:29:50.335339   43252 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987_empty.pem, impossibly tiny 0 bytes
	I1216 05:29:50.335354   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:29:50.335390   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:29:50.335438   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:29:50.335473   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
	I1216 05:29:50.335541   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:50.336161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:29:50.381760   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:29:50.444260   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:29:50.508081   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 05:29:50.544161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:29:50.582491   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:29:50.622980   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:29:50.658191   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:29:50.699338   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /usr/share/ca-certificates/89872.pem (1708 bytes)
	I1216 05:29:50.740697   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:29:50.777649   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem --> /usr/share/ca-certificates/8987.pem (1338 bytes)
	I1216 05:29:50.834711   43252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:29:50.861392   43252 ssh_runner.go:195] Run: openssl version
	I1216 05:29:50.870750   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.884821   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89872.pem /etc/ssl/certs/89872.pem
	I1216 05:29:50.904724   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912864   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:37 /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912927   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.922144   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:29:50.935405   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.949802   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:29:50.963427   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969638   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969710   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.977853   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:29:50.990893   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.005818   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8987.pem /etc/ssl/certs/8987.pem
	I1216 05:29:51.025691   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031718   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:37 /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031835   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.040391   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:29:51.053099   43252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:29:51.059019   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:29:51.066854   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:29:51.074813   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:29:51.083852   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:29:51.091563   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:29:51.099164   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 05:29:51.107806   43252 kubeadm.go:401] StartCluster: {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:51.107968   43252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:29:51.108053   43252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:29:51.150456   43252 cri.go:89] found id: "29bd28649adb8ca63f35f0b053bb0dd532c708c54a4ad724619c9b19b6e7150a"
	I1216 05:29:51.150481   43252 cri.go:89] found id: "37c942e93fe48b66e636792ab0c4a77e93ff849f2b8d640bf766dadb72f83226"
	I1216 05:29:51.150487   43252 cri.go:89] found id: "0ab96910b4685c9b0410dc42540b2e90762243ce3f6800ef8cff6557b3d871e5"
	I1216 05:29:51.150492   43252 cri.go:89] found id: "9f0d95736680c2ffbc4e899e42fffb5fd1ac65fc1b25940e63655787677f2080"
	I1216 05:29:51.150497   43252 cri.go:89] found id: "a95e9d2ccb008cb76b2ebe94260cafcbda0c65691f7771958ed0570c4afd2ef7"
	I1216 05:29:51.150501   43252 cri.go:89] found id: "539bd161320a69e88f0b2fcf03c491b266c439e6ead23d23286225fddab771d1"
	I1216 05:29:51.150506   43252 cri.go:89] found id: "f78fb917ba291b21273e76ea6d97d134329de65253f00fa34617225403819dc7"
	I1216 05:29:51.150510   43252 cri.go:89] found id: "f89d241b5e494c7bd2f78c1e860377bcffa70799bc629fe2b3b3142894e4900a"
	I1216 05:29:51.150516   43252 cri.go:89] found id: ""
	I1216 05:29:51.150574   43252 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
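The capture above ends with minikube's certificate housekeeping for the pause-928970 profile: each CA bundle copied to /usr/share/ca-certificates is linked into /etc/ssl/certs and cross-checked against its OpenSSL subject hash (the `<hash>.0` lookup name, e.g. `3ec20f2e.0`), and every control-plane client and server certificate is probed with `openssl x509 -noout -checkend 86400`, which exits non-zero when the certificate expires within the next 24 hours. The following is only a minimal local sketch of that expiry probe, not minikube's implementation; the helper name and file path are illustrative, and the real checks in the log run over the SSH runner against /var/lib/minikube/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// certExpiresWithin reports whether the certificate at path expires within
// the given number of seconds, mirroring the `openssl x509 -checkend` probe
// in the log above: openssl exits 0 when the certificate stays valid for the
// whole window and 1 when it does not.
func certExpiresWithin(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", path, "-checkend", fmt.Sprint(seconds))
	err := cmd.Run()
	if err == nil {
		return false, nil // still valid for the whole window
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, nil // expires (or has expired) within the window
	}
	return false, err // openssl missing, unreadable file, etc.
}

func main() {
	// Hypothetical local path; the report checks the in-VM certs over SSH.
	expiring, err := certExpiresWithin("./apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Fprintln(os.Stderr, "check failed:", err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}

The sketch keys off the exit status only, which is all a pre-start validity gate needs from the probe.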
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-928970 -n pause-928970
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-928970 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-928970 logs -n 25: (1.525318385s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-764842 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cri-dockerd --version                                                                                                                 │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo containerd config dump                                                                                                                │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo crio config                                                                                                                           │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ delete  │ -p cilium-764842                                                                                                                                            │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │ 16 Dec 25 05:27 UTC │
	│ start   │ -p guest-312283 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-312283           │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │ 16 Dec 25 05:28 UTC │
	│ delete  │ -p cert-expiration-843108                                                                                                                                   │ cert-expiration-843108 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:28 UTC │
	│ start   │ -p pause-928970 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-928970           │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:29 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-374609 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-374609 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │                     │
	│ delete  │ -p stopped-upgrade-374609                                                                                                                                   │ stopped-upgrade-374609 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:28 UTC │
	│ start   │ -p auto-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-764842            │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:30 UTC │
	│ start   │ -p kindnet-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-764842         │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:29 UTC │
	│ start   │ -p pause-928970 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-928970           │ jenkins │ v1.37.0 │ 16 Dec 25 05:29 UTC │ 16 Dec 25 05:30 UTC │
	│ ssh     │ -p auto-764842 pgrep -a kubelet                                                                                                                             │ auto-764842            │ jenkins │ v1.37.0 │ 16 Dec 25 05:30 UTC │ 16 Dec 25 05:30 UTC │
	│ ssh     │ -p kindnet-764842 pgrep -a kubelet                                                                                                                          │ kindnet-764842         │ jenkins │ v1.37.0 │ 16 Dec 25 05:30 UTC │ 16 Dec 25 05:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:29:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:29:40.514053   43252 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:29:40.514301   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514310   43252 out.go:374] Setting ErrFile to fd 2...
	I1216 05:29:40.514314   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514575   43252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:29:40.515061   43252 out.go:368] Setting JSON to false
	I1216 05:29:40.515962   43252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4322,"bootTime":1765858658,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:29:40.516014   43252 start.go:143] virtualization: kvm guest
	I1216 05:29:40.518310   43252 out.go:179] * [pause-928970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:29:40.519733   43252 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:29:40.519789   43252 notify.go:221] Checking for updates...
	I1216 05:29:40.522275   43252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:29:40.523485   43252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:29:40.524676   43252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 05:29:40.526078   43252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:29:40.527287   43252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:29:40.528871   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:40.529388   43252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:29:40.569719   43252 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 05:29:40.570943   43252 start.go:309] selected driver: kvm2
	I1216 05:29:40.570960   43252 start.go:927] validating driver "kvm2" against &{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.571100   43252 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:29:40.571987   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:40.572069   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:40.572137   43252 start.go:353] cluster config:
	{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.572266   43252 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:29:40.574293   43252 out.go:179] * Starting "pause-928970" primary control-plane node in "pause-928970" cluster
	I1216 05:29:40.575250   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:40.575285   43252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:29:40.575298   43252 cache.go:65] Caching tarball of preloaded images
	I1216 05:29:40.575404   43252 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:29:40.575421   43252 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:29:40.575553   43252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/config.json ...
	I1216 05:29:40.575803   43252 start.go:360] acquireMachinesLock for pause-928970: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 05:29:40.575859   43252 start.go:364] duration metric: took 33.309µs to acquireMachinesLock for "pause-928970"
	I1216 05:29:40.575879   43252 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:29:40.575889   43252 fix.go:54] fixHost starting: 
	I1216 05:29:40.577685   43252 fix.go:112] recreateIfNeeded on pause-928970: state=Running err=<nil>
	W1216 05:29:40.577714   43252 fix.go:138] unexpected machine state, will restart: <nil>
	W1216 05:29:39.890179   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:42.390381   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:40.680313   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:40.680997   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:40.681047   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:40.681101   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:40.727461   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:40.727478   39475 cri.go:89] found id: ""
	I1216 05:29:40.727487   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:40.727547   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.732591   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:40.732675   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:40.778266   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:40.778288   39475 cri.go:89] found id: ""
	I1216 05:29:40.778299   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:40.778364   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.783329   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:40.783394   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:40.835509   39475 cri.go:89] found id: ""
	I1216 05:29:40.835536   39475 logs.go:282] 0 containers: []
	W1216 05:29:40.835663   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:40.835806   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:40.836013   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:40.885026   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:40.885069   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:40.885077   39475 cri.go:89] found id: ""
	I1216 05:29:40.885088   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:40.885155   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.891551   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.897730   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:40.897856   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:40.945882   39475 cri.go:89] found id: ""
	I1216 05:29:40.945913   39475 logs.go:282] 0 containers: []
	W1216 05:29:40.945925   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:40.945932   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:40.945997   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:40.991995   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:40.992027   39475 cri.go:89] found id: ""
	I1216 05:29:40.992039   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:40.992108   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.996976   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:40.997053   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:41.042034   39475 cri.go:89] found id: ""
	I1216 05:29:41.042073   39475 logs.go:282] 0 containers: []
	W1216 05:29:41.042087   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:41.042095   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:41.042167   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:41.082309   39475 cri.go:89] found id: ""
	I1216 05:29:41.082337   39475 logs.go:282] 0 containers: []
	W1216 05:29:41.082353   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:41.082369   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:41.082383   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:41.098096   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:41.098140   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:41.146456   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:41.146490   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:41.192125   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:41.192163   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:41.230482   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:41.230512   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:41.494213   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:41.494252   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:41.594216   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:41.594272   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:41.676462   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:41.676486   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:41.676498   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:41.723644   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:41.723676   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:41.798868   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:41.798902   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:40.337252   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:40.838052   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:41.338341   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:41.837529   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:42.338053   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:42.837585   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:43.337984   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:43.432145   42564 kubeadm.go:1114] duration metric: took 3.349796574s to wait for elevateKubeSystemPrivileges
	I1216 05:29:43.432199   42564 kubeadm.go:403] duration metric: took 17.754339874s to StartCluster
	I1216 05:29:43.432226   42564 settings.go:142] acquiring lock: {Name:mk934ce4e0f52c59044080dacae6bea8d1937fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:43.432325   42564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:29:43.434501   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:43.434866   42564 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.174 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:29:43.434913   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:29:43.434925   42564 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:29:43.435124   42564 config.go:182] Loaded profile config "kindnet-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:43.435145   42564 addons.go:70] Setting storage-provisioner=true in profile "kindnet-764842"
	I1216 05:29:43.435175   42564 addons.go:239] Setting addon storage-provisioner=true in "kindnet-764842"
	I1216 05:29:43.435179   42564 addons.go:70] Setting default-storageclass=true in profile "kindnet-764842"
	I1216 05:29:43.435206   42564 host.go:66] Checking if "kindnet-764842" exists ...
	I1216 05:29:43.435206   42564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-764842"
	I1216 05:29:43.436473   42564 out.go:179] * Verifying Kubernetes components...
	I1216 05:29:43.438015   42564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:29:43.438027   42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:43.438943   42564 addons.go:239] Setting addon default-storageclass=true in "kindnet-764842"
	I1216 05:29:43.438990   42564 host.go:66] Checking if "kindnet-764842" exists ...
	I1216 05:29:43.439134   42564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:29:43.439167   42564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:29:43.440999   42564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:29:43.441021   42564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:29:43.442358   42564 main.go:143] libmachine: domain kindnet-764842 has defined MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.442928   42564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fa:85", ip: ""} in network mk-kindnet-764842: {Iface:virbr5 ExpiryTime:2025-12-16 06:29:16 +0000 UTC Type:0 Mac:52:54:00:93:fa:85 Iaid: IPaddr:192.168.83.174 Prefix:24 Hostname:kindnet-764842 Clientid:01:52:54:00:93:fa:85}
	I1216 05:29:43.442956   42564 main.go:143] libmachine: domain kindnet-764842 has defined IP address 192.168.83.174 and MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.443132   42564 sshutil.go:53] new ssh client: &{IP:192.168.83.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/kindnet-764842/id_rsa Username:docker}
	I1216 05:29:43.444056   42564 main.go:143] libmachine: domain kindnet-764842 has defined MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.444455   42564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fa:85", ip: ""} in network mk-kindnet-764842: {Iface:virbr5 ExpiryTime:2025-12-16 06:29:16 +0000 UTC Type:0 Mac:52:54:00:93:fa:85 Iaid: IPaddr:192.168.83.174 Prefix:24 Hostname:kindnet-764842 Clientid:01:52:54:00:93:fa:85}
	I1216 05:29:43.444482   42564 main.go:143] libmachine: domain kindnet-764842 has defined IP address 192.168.83.174 and MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.444653   42564 sshutil.go:53] new ssh client: &{IP:192.168.83.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/kindnet-764842/id_rsa Username:docker}
	I1216 05:29:43.742268   42564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:29:43.742281   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:29:43.823911   42564 node_ready.go:35] waiting up to 15m0s for node "kindnet-764842" to be "Ready" ...
	I1216 05:29:43.868426   42564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:29:43.872567   42564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:29:44.397967   42564 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1216 05:29:44.807057   42564 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:29:44.808414   42564 addons.go:530] duration metric: took 1.373481861s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:29:44.906855   42564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-764842" context rescaled to 1 replicas
	I1216 05:29:40.579289   43252 out.go:252] * Updating the running kvm2 "pause-928970" VM ...
	I1216 05:29:40.579312   43252 machine.go:94] provisionDockerMachine start ...
	I1216 05:29:40.581965   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582465   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.582500   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582721   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.582924   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.582933   43252 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:29:40.707367   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.707396   43252 buildroot.go:166] provisioning hostname "pause-928970"
	I1216 05:29:40.711290   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.711905   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.711930   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.712101   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.712323   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.712336   43252 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-928970 && echo "pause-928970" | sudo tee /etc/hostname
	I1216 05:29:40.865287   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.869501   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870023   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.870062   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870287   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.870603   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.870640   43252 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-928970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-928970/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-928970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:29:41.005458   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:29:41.005500   43252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
	I1216 05:29:41.005567   43252 buildroot.go:174] setting up certificates
	I1216 05:29:41.005579   43252 provision.go:84] configureAuth start
	I1216 05:29:41.009160   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.009589   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.009633   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012561   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012963   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.012984   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.013148   43252 provision.go:143] copyHostCerts
	I1216 05:29:41.013205   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem, removing ...
	I1216 05:29:41.013220   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem
	I1216 05:29:41.013290   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
	I1216 05:29:41.013437   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem, removing ...
	I1216 05:29:41.013450   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem
	I1216 05:29:41.013488   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
	I1216 05:29:41.013570   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem, removing ...
	I1216 05:29:41.013581   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem
	I1216 05:29:41.013611   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
	I1216 05:29:41.013679   43252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.pause-928970 san=[127.0.0.1 192.168.61.105 localhost minikube pause-928970]
	I1216 05:29:41.165551   43252 provision.go:177] copyRemoteCerts
	I1216 05:29:41.165617   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:29:41.168295   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168654   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.168678   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168841   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:41.264251   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:29:41.302576   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 05:29:41.337850   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:29:41.382261   43252 provision.go:87] duration metric: took 376.63928ms to configureAuth
	I1216 05:29:41.382297   43252 buildroot.go:189] setting minikube options for container-runtime
	I1216 05:29:41.382548   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:41.386149   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386674   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.386701   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386990   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:41.387297   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:41.387321   43252 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1216 05:29:44.391577   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:46.393008   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:44.346630   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:44.347287   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:44.347348   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:44.347408   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:44.393883   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:44.393907   39475 cri.go:89] found id: ""
	I1216 05:29:44.393917   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:44.393984   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.399549   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:44.399623   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:44.449881   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:44.449903   39475 cri.go:89] found id: ""
	I1216 05:29:44.449911   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:44.449963   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.455071   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:44.455169   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:44.508690   39475 cri.go:89] found id: ""
	I1216 05:29:44.508728   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.508742   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:44.508751   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:44.508836   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:44.566059   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:44.566086   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:44.566091   39475 cri.go:89] found id: ""
	I1216 05:29:44.566098   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:44.566161   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.572236   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.576935   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:44.577023   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:44.617243   39475 cri.go:89] found id: ""
	I1216 05:29:44.617279   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.617292   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:44.617300   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:44.617375   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:44.661398   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:44.661428   39475 cri.go:89] found id: ""
	I1216 05:29:44.661440   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:44.661517   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.666474   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:44.666581   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:44.706961   39475 cri.go:89] found id: ""
	I1216 05:29:44.706990   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.707001   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:44.707008   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:44.707075   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:44.752086   39475 cri.go:89] found id: ""
	I1216 05:29:44.752118   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.752129   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:44.752143   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:44.752157   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:44.772382   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:44.772410   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:44.862005   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:44.862028   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:44.862044   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:44.908724   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:44.908758   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:44.959914   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:44.959944   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:45.032030   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:45.032070   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:45.071220   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:45.071247   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:45.114037   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:45.114065   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:45.154427   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:45.154457   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:45.416016   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:45.416059   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:48.023537   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:48.024210   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:48.024259   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:48.024320   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:48.070331   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:48.070357   39475 cri.go:89] found id: ""
	I1216 05:29:48.070366   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:48.070433   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.076302   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:48.076369   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:48.126051   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:48.126084   39475 cri.go:89] found id: ""
	I1216 05:29:48.126097   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:48.126175   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.130863   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:48.130948   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:48.175643   39475 cri.go:89] found id: ""
	I1216 05:29:48.175679   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.175691   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:48.175700   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:48.175784   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:48.220234   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:48.220265   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:48.220273   39475 cri.go:89] found id: ""
	I1216 05:29:48.220283   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:48.220353   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.226419   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.230634   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:48.230707   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:48.279166   39475 cri.go:89] found id: ""
	I1216 05:29:48.279200   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.279215   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:48.279224   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:48.279306   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:48.324590   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:48.324616   39475 cri.go:89] found id: ""
	I1216 05:29:48.324626   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:48.324700   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.329509   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:48.329581   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:48.366574   39475 cri.go:89] found id: ""
	I1216 05:29:48.366611   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.366623   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:48.366631   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:48.366701   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:48.411688   39475 cri.go:89] found id: ""
	I1216 05:29:48.411718   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.411797   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:48.411823   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:48.411841   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:47.026203   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:29:47.026235   43252 machine.go:97] duration metric: took 6.446915161s to provisionDockerMachine
	I1216 05:29:47.026248   43252 start.go:293] postStartSetup for "pause-928970" (driver="kvm2")
	I1216 05:29:47.026258   43252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:29:47.026336   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:29:47.029231   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029660   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.029684   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029845   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.122482   43252 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:29:47.128221   43252 info.go:137] Remote host: Buildroot 2025.02
	I1216 05:29:47.128258   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
	I1216 05:29:47.128335   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
	I1216 05:29:47.128465   43252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem -> 89872.pem in /etc/ssl/certs
	I1216 05:29:47.128587   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:29:47.141501   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:47.172632   43252 start.go:296] duration metric: took 146.369932ms for postStartSetup
	I1216 05:29:47.172681   43252 fix.go:56] duration metric: took 6.596792796s for fixHost
	I1216 05:29:47.175870   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176322   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.176359   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176612   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:47.176893   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:47.176906   43252 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 05:29:47.299899   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765862987.294942981
	
	I1216 05:29:47.299926   43252 fix.go:216] guest clock: 1765862987.294942981
	I1216 05:29:47.299934   43252 fix.go:229] Guest: 2025-12-16 05:29:47.294942981 +0000 UTC Remote: 2025-12-16 05:29:47.172688027 +0000 UTC m=+6.714876738 (delta=122.254954ms)
	I1216 05:29:47.299950   43252 fix.go:200] guest clock delta is within tolerance: 122.254954ms
	I1216 05:29:47.299979   43252 start.go:83] releasing machines lock for "pause-928970", held for 6.724085047s
	I1216 05:29:47.303217   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.303706   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.303733   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.304254   43252 ssh_runner.go:195] Run: cat /version.json
	I1216 05:29:47.304373   43252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:29:47.307385   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307620   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307874   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.307903   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308076   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.308080   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.308117   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308296   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.396548   43252 ssh_runner.go:195] Run: systemctl --version
	I1216 05:29:47.423102   43252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:29:47.578195   43252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:29:47.588351   43252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:29:47.588429   43252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:29:47.600145   43252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:29:47.600180   43252 start.go:496] detecting cgroup driver to use...
	I1216 05:29:47.600256   43252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:29:47.622406   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:29:47.640566   43252 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:29:47.640626   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:29:47.662194   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:29:47.681343   43252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:29:47.868246   43252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:29:48.047906   43252 docker.go:234] disabling docker service ...
	I1216 05:29:48.047988   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:29:48.083622   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:29:48.105420   43252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:29:48.302023   43252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:29:48.510783   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:29:48.534546   43252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:29:48.559652   43252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:29:48.559766   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.574253   43252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:29:48.574328   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.591082   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.606886   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.623372   43252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:29:48.638697   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.655862   43252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.679259   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.758061   43252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:29:48.780499   43252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:29:48.815471   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:49.159917   43252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:29:49.559414   43252 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:29:49.559525   43252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:29:49.566263   43252 start.go:564] Will wait 60s for crictl version
	I1216 05:29:49.566357   43252 ssh_runner.go:195] Run: which crictl
	I1216 05:29:49.571120   43252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 05:29:49.604818   43252 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 05:29:49.604898   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.635906   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.669084   43252 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1216 05:29:45.828024   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	W1216 05:29:48.329235   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	I1216 05:29:49.673189   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673618   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:49.673642   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673856   43252 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 05:29:49.679360   43252 kubeadm.go:884] updating cluster {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:29:49.679560   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:49.679606   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.735910   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.735934   43252 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:29:49.735994   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.784434   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.784476   43252 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:29:49.784485   43252 kubeadm.go:935] updating node { 192.168.61.105 8443 v1.34.2 crio true true} ...
	I1216 05:29:49.784602   43252 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-928970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:29:49.784690   43252 ssh_runner.go:195] Run: crio config
	I1216 05:29:49.863510   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:49.863532   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:49.863546   43252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:29:49.863564   43252 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-928970 NodeName:pause-928970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:29:49.863722   43252 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-928970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:29:49.863806   43252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:29:49.881746   43252 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:29:49.881890   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:29:49.903924   43252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 05:29:49.932868   43252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:29:49.977157   43252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1216 05:29:50.016467   43252 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1216 05:29:50.022725   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:50.302708   43252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:29:50.334523   43252 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970 for IP: 192.168.61.105
	I1216 05:29:50.334546   43252 certs.go:195] generating shared ca certs ...
	I1216 05:29:50.334586   43252 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:50.334800   43252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
	I1216 05:29:50.334867   43252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
	I1216 05:29:50.334883   43252 certs.go:257] generating profile certs ...
	I1216 05:29:50.334981   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/client.key
	I1216 05:29:50.335074   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key.d0987635
	I1216 05:29:50.335138   43252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key
	I1216 05:29:50.335292   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem (1338 bytes)
	W1216 05:29:50.335339   43252 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987_empty.pem, impossibly tiny 0 bytes
	I1216 05:29:50.335354   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:29:50.335390   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:29:50.335438   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:29:50.335473   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
	I1216 05:29:50.335541   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:50.336161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:29:50.381760   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:29:50.444260   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:29:50.508081   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	W1216 05:29:48.893900   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:51.390932   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:48.695793   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:48.695847   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:48.804256   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:48.804331   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:48.820540   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:48.820577   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:48.866600   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:48.866636   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:48.918637   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:48.918674   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:48.970382   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:48.970425   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:49.061437   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:49.061465   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:49.061479   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:49.109236   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:49.109269   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:49.193120   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:49.193158   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:51.735891   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:51.736612   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:51.736681   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:51.736743   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:51.791765   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:51.791813   39475 cri.go:89] found id: ""
	I1216 05:29:51.791823   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:51.791894   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.796455   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:51.796566   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:51.845469   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:51.845496   39475 cri.go:89] found id: ""
	I1216 05:29:51.845508   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:51.845575   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.850314   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:51.850401   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:51.897824   39475 cri.go:89] found id: ""
	I1216 05:29:51.897856   39475 logs.go:282] 0 containers: []
	W1216 05:29:51.897870   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:51.897878   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:51.897940   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:51.945193   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:51.945229   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:51.945237   39475 cri.go:89] found id: ""
	I1216 05:29:51.945248   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:51.945320   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.950461   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.955149   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:51.955225   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:52.004855   39475 cri.go:89] found id: ""
	I1216 05:29:52.004893   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.004902   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:52.004908   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:52.004972   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:52.048232   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:52.048257   39475 cri.go:89] found id: ""
	I1216 05:29:52.048267   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:52.048337   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:52.053256   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:52.053335   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:52.091221   39475 cri.go:89] found id: ""
	I1216 05:29:52.091254   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.091263   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:52.091268   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:52.091328   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:52.132637   39475 cri.go:89] found id: ""
	I1216 05:29:52.132667   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.132678   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:52.132694   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:52.132705   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:52.184480   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:52.184509   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:52.290581   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:52.290620   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:52.333635   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:52.333665   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:52.416325   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:52.416363   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:52.434323   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:52.434351   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:52.509084   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:52.509109   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:52.509124   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:52.547487   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:52.547519   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:52.585005   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:52.585041   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:52.621193   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:52.621221   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1216 05:29:50.827802   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	W1216 05:29:53.327854   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	I1216 05:29:50.544161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:29:50.582491   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:29:50.622980   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:29:50.658191   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:29:50.699338   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /usr/share/ca-certificates/89872.pem (1708 bytes)
	I1216 05:29:50.740697   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:29:50.777649   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem --> /usr/share/ca-certificates/8987.pem (1338 bytes)
	I1216 05:29:50.834711   43252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:29:50.861392   43252 ssh_runner.go:195] Run: openssl version
	I1216 05:29:50.870750   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.884821   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89872.pem /etc/ssl/certs/89872.pem
	I1216 05:29:50.904724   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912864   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:37 /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912927   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.922144   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:29:50.935405   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.949802   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:29:50.963427   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969638   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969710   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.977853   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:29:50.990893   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.005818   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8987.pem /etc/ssl/certs/8987.pem
	I1216 05:29:51.025691   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031718   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:37 /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031835   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.040391   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:29:51.053099   43252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:29:51.059019   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:29:51.066854   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:29:51.074813   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:29:51.083852   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:29:51.091563   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:29:51.099164   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 05:29:51.107806   43252 kubeadm.go:401] StartCluster: {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:51.107968   43252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:29:51.108053   43252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:29:51.150456   43252 cri.go:89] found id: "29bd28649adb8ca63f35f0b053bb0dd532c708c54a4ad724619c9b19b6e7150a"
	I1216 05:29:51.150481   43252 cri.go:89] found id: "37c942e93fe48b66e636792ab0c4a77e93ff849f2b8d640bf766dadb72f83226"
	I1216 05:29:51.150487   43252 cri.go:89] found id: "0ab96910b4685c9b0410dc42540b2e90762243ce3f6800ef8cff6557b3d871e5"
	I1216 05:29:51.150492   43252 cri.go:89] found id: "9f0d95736680c2ffbc4e899e42fffb5fd1ac65fc1b25940e63655787677f2080"
	I1216 05:29:51.150497   43252 cri.go:89] found id: "a95e9d2ccb008cb76b2ebe94260cafcbda0c65691f7771958ed0570c4afd2ef7"
	I1216 05:29:51.150501   43252 cri.go:89] found id: "539bd161320a69e88f0b2fcf03c491b266c439e6ead23d23286225fddab771d1"
	I1216 05:29:51.150506   43252 cri.go:89] found id: "f78fb917ba291b21273e76ea6d97d134329de65253f00fa34617225403819dc7"
	I1216 05:29:51.150510   43252 cri.go:89] found id: "f89d241b5e494c7bd2f78c1e860377bcffa70799bc629fe2b3b3142894e4900a"
	I1216 05:29:51.150516   43252 cri.go:89] found id: ""
	I1216 05:29:51.150574   43252 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-928970 -n pause-928970
helpers_test.go:270: (dbg) Run:  kubectl --context pause-928970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-928970 -n pause-928970
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-928970 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-928970 logs -n 25: (1.444900884s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-764842 sudo systemctl status cri-docker --all --full --no-pager                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat cri-docker --no-pager                                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                              │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                        │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cri-dockerd --version                                                                                                                 │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl status containerd --all --full --no-pager                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat containerd --no-pager                                                                                                   │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /lib/systemd/system/containerd.service                                                                                            │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo containerd config dump                                                                                                                │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ ssh     │ -p cilium-764842 sudo crio config                                                                                                                           │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │                     │
	│ delete  │ -p cilium-764842                                                                                                                                            │ cilium-764842          │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │ 16 Dec 25 05:27 UTC │
	│ start   │ -p guest-312283 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-312283           │ jenkins │ v1.37.0 │ 16 Dec 25 05:27 UTC │ 16 Dec 25 05:28 UTC │
	│ delete  │ -p cert-expiration-843108                                                                                                                                   │ cert-expiration-843108 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:28 UTC │
	│ start   │ -p pause-928970 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-928970           │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:29 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-374609 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-374609 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │                     │
	│ delete  │ -p stopped-upgrade-374609                                                                                                                                   │ stopped-upgrade-374609 │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:28 UTC │
	│ start   │ -p auto-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-764842            │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:30 UTC │
	│ start   │ -p kindnet-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-764842         │ jenkins │ v1.37.0 │ 16 Dec 25 05:28 UTC │ 16 Dec 25 05:29 UTC │
	│ start   │ -p pause-928970 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-928970           │ jenkins │ v1.37.0 │ 16 Dec 25 05:29 UTC │ 16 Dec 25 05:30 UTC │
	│ ssh     │ -p auto-764842 pgrep -a kubelet                                                                                                                             │ auto-764842            │ jenkins │ v1.37.0 │ 16 Dec 25 05:30 UTC │ 16 Dec 25 05:30 UTC │
	│ ssh     │ -p kindnet-764842 pgrep -a kubelet                                                                                                                          │ kindnet-764842         │ jenkins │ v1.37.0 │ 16 Dec 25 05:30 UTC │ 16 Dec 25 05:30 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 05:29:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 05:29:40.514053   43252 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:29:40.514301   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514310   43252 out.go:374] Setting ErrFile to fd 2...
	I1216 05:29:40.514314   43252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:29:40.514575   43252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:29:40.515061   43252 out.go:368] Setting JSON to false
	I1216 05:29:40.515962   43252 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4322,"bootTime":1765858658,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:29:40.516014   43252 start.go:143] virtualization: kvm guest
	I1216 05:29:40.518310   43252 out.go:179] * [pause-928970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:29:40.519733   43252 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:29:40.519789   43252 notify.go:221] Checking for updates...
	I1216 05:29:40.522275   43252 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:29:40.523485   43252 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:29:40.524676   43252 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 05:29:40.526078   43252 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:29:40.527287   43252 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:29:40.528871   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:40.529388   43252 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:29:40.569719   43252 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 05:29:40.570943   43252 start.go:309] selected driver: kvm2
	I1216 05:29:40.570960   43252 start.go:927] validating driver "kvm2" against &{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.571100   43252 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:29:40.571987   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:40.572069   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:40.572137   43252 start.go:353] cluster config:
	{Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:40.572266   43252 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 05:29:40.574293   43252 out.go:179] * Starting "pause-928970" primary control-plane node in "pause-928970" cluster
	I1216 05:29:40.575250   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:40.575285   43252 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 05:29:40.575298   43252 cache.go:65] Caching tarball of preloaded images
	I1216 05:29:40.575404   43252 preload.go:238] Found /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 05:29:40.575421   43252 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 05:29:40.575553   43252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/config.json ...
	I1216 05:29:40.575803   43252 start.go:360] acquireMachinesLock for pause-928970: {Name:mk62c9c2852efe4dee40756b90f6ebee1eabe893 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 05:29:40.575859   43252 start.go:364] duration metric: took 33.309µs to acquireMachinesLock for "pause-928970"
	I1216 05:29:40.575879   43252 start.go:96] Skipping create...Using existing machine configuration
	I1216 05:29:40.575889   43252 fix.go:54] fixHost starting: 
	I1216 05:29:40.577685   43252 fix.go:112] recreateIfNeeded on pause-928970: state=Running err=<nil>
	W1216 05:29:40.577714   43252 fix.go:138] unexpected machine state, will restart: <nil>
	W1216 05:29:39.890179   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:42.390381   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:40.680313   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:40.680997   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:40.681047   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:40.681101   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:40.727461   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:40.727478   39475 cri.go:89] found id: ""
	I1216 05:29:40.727487   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:40.727547   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.732591   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:40.732675   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:40.778266   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:40.778288   39475 cri.go:89] found id: ""
	I1216 05:29:40.778299   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:40.778364   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.783329   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:40.783394   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:40.835509   39475 cri.go:89] found id: ""
	I1216 05:29:40.835536   39475 logs.go:282] 0 containers: []
	W1216 05:29:40.835663   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:40.835806   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:40.836013   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:40.885026   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:40.885069   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:40.885077   39475 cri.go:89] found id: ""
	I1216 05:29:40.885088   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:40.885155   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.891551   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.897730   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:40.897856   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:40.945882   39475 cri.go:89] found id: ""
	I1216 05:29:40.945913   39475 logs.go:282] 0 containers: []
	W1216 05:29:40.945925   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:40.945932   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:40.945997   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:40.991995   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:40.992027   39475 cri.go:89] found id: ""
	I1216 05:29:40.992039   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:40.992108   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:40.996976   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:40.997053   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:41.042034   39475 cri.go:89] found id: ""
	I1216 05:29:41.042073   39475 logs.go:282] 0 containers: []
	W1216 05:29:41.042087   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:41.042095   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:41.042167   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:41.082309   39475 cri.go:89] found id: ""
	I1216 05:29:41.082337   39475 logs.go:282] 0 containers: []
	W1216 05:29:41.082353   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:41.082369   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:41.082383   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:41.098096   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:41.098140   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:41.146456   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:41.146490   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:41.192125   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:41.192163   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:41.230482   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:41.230512   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:41.494213   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:41.494252   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:41.594216   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:41.594272   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:41.676462   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:41.676486   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:41.676498   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:41.723644   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:41.723676   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:41.798868   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:41.798902   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:40.337252   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:40.838052   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:41.338341   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:41.837529   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:42.338053   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:42.837585   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:43.337984   42564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 05:29:43.432145   42564 kubeadm.go:1114] duration metric: took 3.349796574s to wait for elevateKubeSystemPrivileges
	I1216 05:29:43.432199   42564 kubeadm.go:403] duration metric: took 17.754339874s to StartCluster
	I1216 05:29:43.432226   42564 settings.go:142] acquiring lock: {Name:mk934ce4e0f52c59044080dacae6bea8d1937fab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:43.432325   42564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:29:43.434501   42564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/kubeconfig: {Name:mk2e0aa2a9ecd47e0407b52e183f6fd294eb595a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:43.434866   42564 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.83.174 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 05:29:43.434913   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 05:29:43.434925   42564 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 05:29:43.435124   42564 config.go:182] Loaded profile config "kindnet-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:43.435145   42564 addons.go:70] Setting storage-provisioner=true in profile "kindnet-764842"
	I1216 05:29:43.435175   42564 addons.go:239] Setting addon storage-provisioner=true in "kindnet-764842"
	I1216 05:29:43.435179   42564 addons.go:70] Setting default-storageclass=true in profile "kindnet-764842"
	I1216 05:29:43.435206   42564 host.go:66] Checking if "kindnet-764842" exists ...
	I1216 05:29:43.435206   42564 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-764842"
	I1216 05:29:43.436473   42564 out.go:179] * Verifying Kubernetes components...
	I1216 05:29:43.438015   42564 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 05:29:43.438027   42564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:43.438943   42564 addons.go:239] Setting addon default-storageclass=true in "kindnet-764842"
	I1216 05:29:43.438990   42564 host.go:66] Checking if "kindnet-764842" exists ...
	I1216 05:29:43.439134   42564 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:29:43.439167   42564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 05:29:43.440999   42564 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 05:29:43.441021   42564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 05:29:43.442358   42564 main.go:143] libmachine: domain kindnet-764842 has defined MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.442928   42564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fa:85", ip: ""} in network mk-kindnet-764842: {Iface:virbr5 ExpiryTime:2025-12-16 06:29:16 +0000 UTC Type:0 Mac:52:54:00:93:fa:85 Iaid: IPaddr:192.168.83.174 Prefix:24 Hostname:kindnet-764842 Clientid:01:52:54:00:93:fa:85}
	I1216 05:29:43.442956   42564 main.go:143] libmachine: domain kindnet-764842 has defined IP address 192.168.83.174 and MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.443132   42564 sshutil.go:53] new ssh client: &{IP:192.168.83.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/kindnet-764842/id_rsa Username:docker}
	I1216 05:29:43.444056   42564 main.go:143] libmachine: domain kindnet-764842 has defined MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.444455   42564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:93:fa:85", ip: ""} in network mk-kindnet-764842: {Iface:virbr5 ExpiryTime:2025-12-16 06:29:16 +0000 UTC Type:0 Mac:52:54:00:93:fa:85 Iaid: IPaddr:192.168.83.174 Prefix:24 Hostname:kindnet-764842 Clientid:01:52:54:00:93:fa:85}
	I1216 05:29:43.444482   42564 main.go:143] libmachine: domain kindnet-764842 has defined IP address 192.168.83.174 and MAC address 52:54:00:93:fa:85 in network mk-kindnet-764842
	I1216 05:29:43.444653   42564 sshutil.go:53] new ssh client: &{IP:192.168.83.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/kindnet-764842/id_rsa Username:docker}
	I1216 05:29:43.742268   42564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:29:43.742281   42564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 05:29:43.823911   42564 node_ready.go:35] waiting up to 15m0s for node "kindnet-764842" to be "Ready" ...
	I1216 05:29:43.868426   42564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 05:29:43.872567   42564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 05:29:44.397967   42564 start.go:977] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1216 05:29:44.807057   42564 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 05:29:44.808414   42564 addons.go:530] duration metric: took 1.373481861s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 05:29:44.906855   42564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-764842" context rescaled to 1 replicas
	I1216 05:29:40.579289   43252 out.go:252] * Updating the running kvm2 "pause-928970" VM ...
	I1216 05:29:40.579312   43252 machine.go:94] provisionDockerMachine start ...
	I1216 05:29:40.581965   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582465   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.582500   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.582721   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.582924   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.582933   43252 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 05:29:40.707367   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.707396   43252 buildroot.go:166] provisioning hostname "pause-928970"
	I1216 05:29:40.711290   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.711905   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.711930   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.712101   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.712323   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.712336   43252 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-928970 && echo "pause-928970" | sudo tee /etc/hostname
	I1216 05:29:40.865287   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-928970
	
	I1216 05:29:40.869501   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870023   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:40.870062   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:40.870287   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:40.870603   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:40.870640   43252 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-928970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-928970/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-928970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 05:29:41.005458   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 05:29:41.005500   43252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22141-5059/.minikube CaCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22141-5059/.minikube}
	I1216 05:29:41.005567   43252 buildroot.go:174] setting up certificates
	I1216 05:29:41.005579   43252 provision.go:84] configureAuth start
	I1216 05:29:41.009160   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.009589   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.009633   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012561   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.012963   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.012984   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.013148   43252 provision.go:143] copyHostCerts
	I1216 05:29:41.013205   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem, removing ...
	I1216 05:29:41.013220   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem
	I1216 05:29:41.013290   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/ca.pem (1082 bytes)
	I1216 05:29:41.013437   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem, removing ...
	I1216 05:29:41.013450   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem
	I1216 05:29:41.013488   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/cert.pem (1123 bytes)
	I1216 05:29:41.013570   43252 exec_runner.go:144] found /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem, removing ...
	I1216 05:29:41.013581   43252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem
	I1216 05:29:41.013611   43252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22141-5059/.minikube/key.pem (1675 bytes)
	I1216 05:29:41.013679   43252 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem org=jenkins.pause-928970 san=[127.0.0.1 192.168.61.105 localhost minikube pause-928970]
	I1216 05:29:41.165551   43252 provision.go:177] copyRemoteCerts
	I1216 05:29:41.165617   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 05:29:41.168295   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168654   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.168678   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.168841   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:41.264251   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 05:29:41.302576   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 05:29:41.337850   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 05:29:41.382261   43252 provision.go:87] duration metric: took 376.63928ms to configureAuth
	I1216 05:29:41.382297   43252 buildroot.go:189] setting minikube options for container-runtime
	I1216 05:29:41.382548   43252 config.go:182] Loaded profile config "pause-928970": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:29:41.386149   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386674   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:41.386701   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:41.386990   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:41.387297   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:41.387321   43252 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1216 05:29:44.391577   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:46.393008   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:44.346630   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:44.347287   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:44.347348   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:44.347408   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:44.393883   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:44.393907   39475 cri.go:89] found id: ""
	I1216 05:29:44.393917   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:44.393984   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.399549   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:44.399623   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:44.449881   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:44.449903   39475 cri.go:89] found id: ""
	I1216 05:29:44.449911   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:44.449963   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.455071   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:44.455169   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:44.508690   39475 cri.go:89] found id: ""
	I1216 05:29:44.508728   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.508742   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:44.508751   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:44.508836   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:44.566059   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:44.566086   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:44.566091   39475 cri.go:89] found id: ""
	I1216 05:29:44.566098   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:44.566161   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.572236   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.576935   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:44.577023   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:44.617243   39475 cri.go:89] found id: ""
	I1216 05:29:44.617279   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.617292   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:44.617300   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:44.617375   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:44.661398   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:44.661428   39475 cri.go:89] found id: ""
	I1216 05:29:44.661440   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:44.661517   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:44.666474   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:44.666581   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:44.706961   39475 cri.go:89] found id: ""
	I1216 05:29:44.706990   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.707001   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:44.707008   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:44.707075   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:44.752086   39475 cri.go:89] found id: ""
	I1216 05:29:44.752118   39475 logs.go:282] 0 containers: []
	W1216 05:29:44.752129   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:44.752143   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:44.752157   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:44.772382   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:44.772410   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:44.862005   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:44.862028   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:44.862044   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:44.908724   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:44.908758   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:44.959914   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:44.959944   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:45.032030   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:45.032070   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:45.071220   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:45.071247   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:45.114037   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:45.114065   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:45.154427   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:45.154457   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:45.416016   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:45.416059   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:48.023537   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:48.024210   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:48.024259   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:48.024320   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:48.070331   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:48.070357   39475 cri.go:89] found id: ""
	I1216 05:29:48.070366   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:48.070433   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.076302   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:48.076369   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:48.126051   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:48.126084   39475 cri.go:89] found id: ""
	I1216 05:29:48.126097   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:48.126175   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.130863   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:48.130948   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:48.175643   39475 cri.go:89] found id: ""
	I1216 05:29:48.175679   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.175691   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:48.175700   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:48.175784   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:48.220234   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:48.220265   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:48.220273   39475 cri.go:89] found id: ""
	I1216 05:29:48.220283   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:48.220353   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.226419   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.230634   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:48.230707   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:48.279166   39475 cri.go:89] found id: ""
	I1216 05:29:48.279200   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.279215   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:48.279224   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:48.279306   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:48.324590   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:48.324616   39475 cri.go:89] found id: ""
	I1216 05:29:48.324626   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:48.324700   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:48.329509   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:48.329581   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:48.366574   39475 cri.go:89] found id: ""
	I1216 05:29:48.366611   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.366623   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:48.366631   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:48.366701   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:48.411688   39475 cri.go:89] found id: ""
	I1216 05:29:48.411718   39475 logs.go:282] 0 containers: []
	W1216 05:29:48.411797   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:48.411823   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:48.411841   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 05:29:47.026203   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 05:29:47.026235   43252 machine.go:97] duration metric: took 6.446915161s to provisionDockerMachine
	I1216 05:29:47.026248   43252 start.go:293] postStartSetup for "pause-928970" (driver="kvm2")
	I1216 05:29:47.026258   43252 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 05:29:47.026336   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 05:29:47.029231   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029660   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.029684   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.029845   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.122482   43252 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 05:29:47.128221   43252 info.go:137] Remote host: Buildroot 2025.02
	I1216 05:29:47.128258   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/addons for local assets ...
	I1216 05:29:47.128335   43252 filesync.go:126] Scanning /home/jenkins/minikube-integration/22141-5059/.minikube/files for local assets ...
	I1216 05:29:47.128465   43252 filesync.go:149] local asset: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem -> 89872.pem in /etc/ssl/certs
	I1216 05:29:47.128587   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 05:29:47.141501   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:47.172632   43252 start.go:296] duration metric: took 146.369932ms for postStartSetup
	I1216 05:29:47.172681   43252 fix.go:56] duration metric: took 6.596792796s for fixHost
	I1216 05:29:47.175870   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176322   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.176359   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.176612   43252 main.go:143] libmachine: Using SSH client type: native
	I1216 05:29:47.176893   43252 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1216 05:29:47.176906   43252 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 05:29:47.299899   43252 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765862987.294942981
	
	I1216 05:29:47.299926   43252 fix.go:216] guest clock: 1765862987.294942981
	I1216 05:29:47.299934   43252 fix.go:229] Guest: 2025-12-16 05:29:47.294942981 +0000 UTC Remote: 2025-12-16 05:29:47.172688027 +0000 UTC m=+6.714876738 (delta=122.254954ms)
	I1216 05:29:47.299950   43252 fix.go:200] guest clock delta is within tolerance: 122.254954ms
	I1216 05:29:47.299979   43252 start.go:83] releasing machines lock for "pause-928970", held for 6.724085047s
	I1216 05:29:47.303217   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.303706   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.303733   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.304254   43252 ssh_runner.go:195] Run: cat /version.json
	I1216 05:29:47.304373   43252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 05:29:47.307385   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307620   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.307874   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.307903   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308076   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:47.308080   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.308117   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:47.308296   43252 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/pause-928970/id_rsa Username:docker}
	I1216 05:29:47.396548   43252 ssh_runner.go:195] Run: systemctl --version
	I1216 05:29:47.423102   43252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 05:29:47.578195   43252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 05:29:47.588351   43252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 05:29:47.588429   43252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 05:29:47.600145   43252 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 05:29:47.600180   43252 start.go:496] detecting cgroup driver to use...
	I1216 05:29:47.600256   43252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 05:29:47.622406   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 05:29:47.640566   43252 docker.go:218] disabling cri-docker service (if available) ...
	I1216 05:29:47.640626   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 05:29:47.662194   43252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 05:29:47.681343   43252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 05:29:47.868246   43252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 05:29:48.047906   43252 docker.go:234] disabling docker service ...
	I1216 05:29:48.047988   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 05:29:48.083622   43252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 05:29:48.105420   43252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 05:29:48.302023   43252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 05:29:48.510783   43252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 05:29:48.534546   43252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 05:29:48.559652   43252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 05:29:48.559766   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.574253   43252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 05:29:48.574328   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.591082   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.606886   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.623372   43252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 05:29:48.638697   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.655862   43252 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.679259   43252 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 05:29:48.758061   43252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 05:29:48.780499   43252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 05:29:48.815471   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:49.159917   43252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 05:29:49.559414   43252 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 05:29:49.559525   43252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 05:29:49.566263   43252 start.go:564] Will wait 60s for crictl version
	I1216 05:29:49.566357   43252 ssh_runner.go:195] Run: which crictl
	I1216 05:29:49.571120   43252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 05:29:49.604818   43252 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 05:29:49.604898   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.635906   43252 ssh_runner.go:195] Run: crio --version
	I1216 05:29:49.669084   43252 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	W1216 05:29:45.828024   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	W1216 05:29:48.329235   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	I1216 05:29:49.673189   43252 main.go:143] libmachine: domain pause-928970 has defined MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673618   43252 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:25:1a:f6", ip: ""} in network mk-pause-928970: {Iface:virbr3 ExpiryTime:2025-12-16 06:28:34 +0000 UTC Type:0 Mac:52:54:00:25:1a:f6 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:pause-928970 Clientid:01:52:54:00:25:1a:f6}
	I1216 05:29:49.673642   43252 main.go:143] libmachine: domain pause-928970 has defined IP address 192.168.61.105 and MAC address 52:54:00:25:1a:f6 in network mk-pause-928970
	I1216 05:29:49.673856   43252 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 05:29:49.679360   43252 kubeadm.go:884] updating cluster {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 05:29:49.679560   43252 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 05:29:49.679606   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.735910   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.735934   43252 crio.go:433] Images already preloaded, skipping extraction
	I1216 05:29:49.735994   43252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 05:29:49.784434   43252 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 05:29:49.784476   43252 cache_images.go:86] Images are preloaded, skipping loading
	I1216 05:29:49.784485   43252 kubeadm.go:935] updating node { 192.168.61.105 8443 v1.34.2 crio true true} ...
	I1216 05:29:49.784602   43252 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-928970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 05:29:49.784690   43252 ssh_runner.go:195] Run: crio config
	I1216 05:29:49.863510   43252 cni.go:84] Creating CNI manager for ""
	I1216 05:29:49.863532   43252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 05:29:49.863546   43252 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 05:29:49.863564   43252 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-928970 NodeName:pause-928970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 05:29:49.863722   43252 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-928970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 05:29:49.863806   43252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 05:29:49.881746   43252 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 05:29:49.881890   43252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 05:29:49.903924   43252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 05:29:49.932868   43252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 05:29:49.977157   43252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1216 05:29:50.016467   43252 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1216 05:29:50.022725   43252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 05:29:50.302708   43252 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 05:29:50.334523   43252 certs.go:69] Setting up /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970 for IP: 192.168.61.105
	I1216 05:29:50.334546   43252 certs.go:195] generating shared ca certs ...
	I1216 05:29:50.334586   43252 certs.go:227] acquiring lock for ca certs: {Name:mkeb038c86653b42975db55bc13142d606c3d109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 05:29:50.334800   43252 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key
	I1216 05:29:50.334867   43252 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key
	I1216 05:29:50.334883   43252 certs.go:257] generating profile certs ...
	I1216 05:29:50.334981   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/client.key
	I1216 05:29:50.335074   43252 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key.d0987635
	I1216 05:29:50.335138   43252 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key
	I1216 05:29:50.335292   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem (1338 bytes)
	W1216 05:29:50.335339   43252 certs.go:480] ignoring /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987_empty.pem, impossibly tiny 0 bytes
	I1216 05:29:50.335354   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 05:29:50.335390   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/ca.pem (1082 bytes)
	I1216 05:29:50.335438   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/cert.pem (1123 bytes)
	I1216 05:29:50.335473   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/certs/key.pem (1675 bytes)
	I1216 05:29:50.335541   43252 certs.go:484] found cert: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem (1708 bytes)
	I1216 05:29:50.336161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 05:29:50.381760   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 05:29:50.444260   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 05:29:50.508081   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	W1216 05:29:48.893900   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	W1216 05:29:51.390932   42450 pod_ready.go:104] pod "coredns-66bc5c9577-g8msg" is not "Ready", error: <nil>
	I1216 05:29:48.695793   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:48.695847   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:48.804256   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:48.804331   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:48.820540   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:48.820577   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:48.866600   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:48.866636   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:48.918637   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:48.918674   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:48.970382   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:48.970425   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:49.061437   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:49.061465   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:49.061479   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:49.109236   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:49.109269   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:49.193120   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:49.193158   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:51.735891   39475 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I1216 05:29:51.736612   39475 api_server.go:269] stopped: https://192.168.39.159:8443/healthz: Get "https://192.168.39.159:8443/healthz": dial tcp 192.168.39.159:8443: connect: connection refused
	I1216 05:29:51.736681   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 05:29:51.736743   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 05:29:51.791765   39475 cri.go:89] found id: "2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:51.791813   39475 cri.go:89] found id: ""
	I1216 05:29:51.791823   39475 logs.go:282] 1 containers: [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8]
	I1216 05:29:51.791894   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.796455   39475 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 05:29:51.796566   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 05:29:51.845469   39475 cri.go:89] found id: "6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:51.845496   39475 cri.go:89] found id: ""
	I1216 05:29:51.845508   39475 logs.go:282] 1 containers: [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8]
	I1216 05:29:51.845575   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.850314   39475 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 05:29:51.850401   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 05:29:51.897824   39475 cri.go:89] found id: ""
	I1216 05:29:51.897856   39475 logs.go:282] 0 containers: []
	W1216 05:29:51.897870   39475 logs.go:284] No container was found matching "coredns"
	I1216 05:29:51.897878   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 05:29:51.897940   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 05:29:51.945193   39475 cri.go:89] found id: "e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:51.945229   39475 cri.go:89] found id: "38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:51.945237   39475 cri.go:89] found id: ""
	I1216 05:29:51.945248   39475 logs.go:282] 2 containers: [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278]
	I1216 05:29:51.945320   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.950461   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:51.955149   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 05:29:51.955225   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 05:29:52.004855   39475 cri.go:89] found id: ""
	I1216 05:29:52.004893   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.004902   39475 logs.go:284] No container was found matching "kube-proxy"
	I1216 05:29:52.004908   39475 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 05:29:52.004972   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 05:29:52.048232   39475 cri.go:89] found id: "c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:52.048257   39475 cri.go:89] found id: ""
	I1216 05:29:52.048267   39475 logs.go:282] 1 containers: [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d]
	I1216 05:29:52.048337   39475 ssh_runner.go:195] Run: which crictl
	I1216 05:29:52.053256   39475 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 05:29:52.053335   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 05:29:52.091221   39475 cri.go:89] found id: ""
	I1216 05:29:52.091254   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.091263   39475 logs.go:284] No container was found matching "kindnet"
	I1216 05:29:52.091268   39475 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1216 05:29:52.091328   39475 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1216 05:29:52.132637   39475 cri.go:89] found id: ""
	I1216 05:29:52.132667   39475 logs.go:282] 0 containers: []
	W1216 05:29:52.132678   39475 logs.go:284] No container was found matching "storage-provisioner"
	I1216 05:29:52.132694   39475 logs.go:123] Gathering logs for container status ...
	I1216 05:29:52.132705   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 05:29:52.184480   39475 logs.go:123] Gathering logs for kubelet ...
	I1216 05:29:52.184509   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 05:29:52.290581   39475 logs.go:123] Gathering logs for etcd [6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8] ...
	I1216 05:29:52.290620   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6fc82b798a8a29f8b141af04e69d134df9015e4c414ed0630acd2b63ac2303a8"
	I1216 05:29:52.333635   39475 logs.go:123] Gathering logs for kube-scheduler [e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980] ...
	I1216 05:29:52.333665   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e760fb234851bd5e59f856f0e0c18f3d161121dd9b9e57ac6bc28167fc7b0980"
	I1216 05:29:52.416325   39475 logs.go:123] Gathering logs for dmesg ...
	I1216 05:29:52.416363   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 05:29:52.434323   39475 logs.go:123] Gathering logs for describe nodes ...
	I1216 05:29:52.434351   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 05:29:52.509084   39475 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 05:29:52.509109   39475 logs.go:123] Gathering logs for kube-apiserver [2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8] ...
	I1216 05:29:52.509124   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b77e8379a3a697fc15cf52d5255f52b61da3c25f2212f88af602d291d98c6f8"
	I1216 05:29:52.547487   39475 logs.go:123] Gathering logs for kube-scheduler [38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278] ...
	I1216 05:29:52.547519   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38282c727098289f47850c75d78f900d28fc5e89217141ed00adc37a4450e278"
	I1216 05:29:52.585005   39475 logs.go:123] Gathering logs for kube-controller-manager [c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d] ...
	I1216 05:29:52.585041   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c973dca21526d2dc388f73b2de1ba61226f43a188d0f772aacf084109cb8b13d"
	I1216 05:29:52.621193   39475 logs.go:123] Gathering logs for CRI-O ...
	I1216 05:29:52.621221   39475 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1216 05:29:50.827802   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	W1216 05:29:53.327854   42564 node_ready.go:57] node "kindnet-764842" has "Ready":"False" status (will retry)
	I1216 05:29:50.544161   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 05:29:50.582491   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 05:29:50.622980   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 05:29:50.658191   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/pause-928970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 05:29:50.699338   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/ssl/certs/89872.pem --> /usr/share/ca-certificates/89872.pem (1708 bytes)
	I1216 05:29:50.740697   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 05:29:50.777649   43252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22141-5059/.minikube/certs/8987.pem --> /usr/share/ca-certificates/8987.pem (1338 bytes)
	I1216 05:29:50.834711   43252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 05:29:50.861392   43252 ssh_runner.go:195] Run: openssl version
	I1216 05:29:50.870750   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.884821   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89872.pem /etc/ssl/certs/89872.pem
	I1216 05:29:50.904724   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912864   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 04:37 /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.912927   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89872.pem
	I1216 05:29:50.922144   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 05:29:50.935405   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.949802   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 05:29:50.963427   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969638   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 04:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.969710   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 05:29:50.977853   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 05:29:50.990893   43252 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.005818   43252 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8987.pem /etc/ssl/certs/8987.pem
	I1216 05:29:51.025691   43252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031718   43252 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 04:37 /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.031835   43252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8987.pem
	I1216 05:29:51.040391   43252 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 05:29:51.053099   43252 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 05:29:51.059019   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 05:29:51.066854   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 05:29:51.074813   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 05:29:51.083852   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 05:29:51.091563   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 05:29:51.099164   43252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 05:29:51.107806   43252 kubeadm.go:401] StartCluster: {Name:pause-928970 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-928970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 05:29:51.107968   43252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 05:29:51.108053   43252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 05:29:51.150456   43252 cri.go:89] found id: "29bd28649adb8ca63f35f0b053bb0dd532c708c54a4ad724619c9b19b6e7150a"
	I1216 05:29:51.150481   43252 cri.go:89] found id: "37c942e93fe48b66e636792ab0c4a77e93ff849f2b8d640bf766dadb72f83226"
	I1216 05:29:51.150487   43252 cri.go:89] found id: "0ab96910b4685c9b0410dc42540b2e90762243ce3f6800ef8cff6557b3d871e5"
	I1216 05:29:51.150492   43252 cri.go:89] found id: "9f0d95736680c2ffbc4e899e42fffb5fd1ac65fc1b25940e63655787677f2080"
	I1216 05:29:51.150497   43252 cri.go:89] found id: "a95e9d2ccb008cb76b2ebe94260cafcbda0c65691f7771958ed0570c4afd2ef7"
	I1216 05:29:51.150501   43252 cri.go:89] found id: "539bd161320a69e88f0b2fcf03c491b266c439e6ead23d23286225fddab771d1"
	I1216 05:29:51.150506   43252 cri.go:89] found id: "f78fb917ba291b21273e76ea6d97d134329de65253f00fa34617225403819dc7"
	I1216 05:29:51.150510   43252 cri.go:89] found id: "f89d241b5e494c7bd2f78c1e860377bcffa70799bc629fe2b3b3142894e4900a"
	I1216 05:29:51.150516   43252 cri.go:89] found id: ""
	I1216 05:29:51.150574   43252 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-928970 -n pause-928970
helpers_test.go:270: (dbg) Run:  kubectl --context pause-928970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (33.54s)
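Note: the post-mortem log above repeats one pattern per control-plane component: it asks CRI-O for matching container IDs ("sudo crictl ps -a --quiet --name=<component>"), tails the last 400 lines of each container found, and collects kubelet and CRI-O output via journalctl. The shell sketch below only illustrates that log-gathering pattern for manual debugging; it assumes a shell inside the guest VM (for example via "minikube ssh -p pause-928970") and is not part of the test suite itself.

	# Minimal sketch of the post-mortem log collection seen above.
	# Assumption: run inside the minikube guest with crictl and journalctl available.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  if [ -z "${ids}" ]; then
	    echo "No container was found matching ${name}"
	    continue
	  fi
	  for id in ${ids}; do
	    echo "=== ${name} (${id}) ==="
	    sudo crictl logs --tail 400 "${id}"   # per-container logs, last 400 lines
	  done
	done
	sudo journalctl -u kubelet -n 400          # kubelet logs
	sudo journalctl -u crio -n 400             # CRI-O logs
	sudo crictl ps -a                          # overall container status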

                                                
                                    

Test pass (376/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.1
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.2/json-events 10.95
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.15
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-beta.0/json-events 10.96
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.14
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.63
31 TestOffline 121.19
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 134.14
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/serial/GCPAuth/FakeCredentials 11.52
44 TestAddons/parallel/Registry 18.89
45 TestAddons/parallel/RegistryCreds 0.71
47 TestAddons/parallel/InspektorGadget 11.74
48 TestAddons/parallel/MetricsServer 6.83
50 TestAddons/parallel/CSI 49.18
51 TestAddons/parallel/Headlamp 20.2
52 TestAddons/parallel/CloudSpanner 6.55
53 TestAddons/parallel/LocalPath 55.62
54 TestAddons/parallel/NvidiaDevicePlugin 7.01
55 TestAddons/parallel/Yakd 12.29
57 TestAddons/StoppedEnableDisable 86.26
58 TestCertOptions 47.55
59 TestCertExpiration 297.82
61 TestForceSystemdFlag 51.66
62 TestForceSystemdEnv 43.23
67 TestErrorSpam/setup 36.2
68 TestErrorSpam/start 0.33
69 TestErrorSpam/status 0.66
70 TestErrorSpam/pause 1.56
71 TestErrorSpam/unpause 1.8
72 TestErrorSpam/stop 4.63
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 50.06
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 51.02
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
84 TestFunctional/serial/CacheCmd/cache/add_local 2.27
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 42.47
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.32
95 TestFunctional/serial/LogsFileCmd 1.32
96 TestFunctional/serial/InvalidService 4.06
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 13.88
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.11
102 TestFunctional/parallel/StatusCmd 0.67
106 TestFunctional/parallel/ServiceCmdConnect 24.65
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 41.61
110 TestFunctional/parallel/SSHCmd 0.33
111 TestFunctional/parallel/CpCmd 1.11
112 TestFunctional/parallel/MySQL 31.6
113 TestFunctional/parallel/FileSync 0.17
114 TestFunctional/parallel/CertSync 1.09
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
122 TestFunctional/parallel/License 0.39
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
127 TestFunctional/parallel/ImageCommands/ImageBuild 5.75
128 TestFunctional/parallel/ImageCommands/Setup 2.03
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.28
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.88
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.99
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
148 TestFunctional/parallel/ServiceCmd/DeployApp 16.24
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
150 TestFunctional/parallel/ProfileCmd/profile_list 0.43
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
152 TestFunctional/parallel/MountCmd/any-port 7.94
153 TestFunctional/parallel/MountCmd/specific-port 1.32
154 TestFunctional/parallel/ServiceCmd/List 1.29
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.01
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.24
157 TestFunctional/parallel/Version/short 0.07
158 TestFunctional/parallel/Version/components 0.64
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
160 TestFunctional/parallel/ServiceCmd/Format 0.32
161 TestFunctional/parallel/ServiceCmd/URL 0.33
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 74.23
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 38.88
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.14
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.18
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.52
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 35.54
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.25
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.29
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 39.28
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.25
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.68
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 11.45
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 41.64
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.38
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.2
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 33.53
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.2
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.08
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.32
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.37
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 9.23
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.3
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.3
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.3
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.93
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.22
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.21
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.24
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.25
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.23
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.42
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.19
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.21
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.28
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.19
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 15.03
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.93
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.62
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.32
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.15
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 1.38
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.72
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.62
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.55
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
261 TestMultiControlPlane/serial/StartCluster 191.95
262 TestMultiControlPlane/serial/DeployApp 7.08
263 TestMultiControlPlane/serial/PingHostFromPods 1.31
264 TestMultiControlPlane/serial/AddWorkerNode 45.69
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
267 TestMultiControlPlane/serial/CopyFile 10.63
268 TestMultiControlPlane/serial/StopSecondaryNode 87.94
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
270 TestMultiControlPlane/serial/RestartSecondaryNode 31.17
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 376
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.08
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 253.9
276 TestMultiControlPlane/serial/RestartCluster 99.79
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
278 TestMultiControlPlane/serial/AddSecondaryNode 75.86
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
284 TestJSONOutput/start/Command 76.75
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.73
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.64
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.95
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 83.88
316 TestMountStart/serial/StartWithMountFirst 20.08
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 19.93
319 TestMountStart/serial/VerifyMountSecond 0.32
320 TestMountStart/serial/DeleteFirst 0.71
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.31
323 TestMountStart/serial/RestartStopped 21.82
324 TestMountStart/serial/VerifyMountPostStop 0.32
327 TestMultiNode/serial/FreshStart2Nodes 100.19
328 TestMultiNode/serial/DeployApp2Nodes 6.11
329 TestMultiNode/serial/PingHostFrom2Pods 0.9
330 TestMultiNode/serial/AddNode 48.59
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.11
334 TestMultiNode/serial/StopNode 2.36
335 TestMultiNode/serial/StartAfterStop 42.25
336 TestMultiNode/serial/RestartKeepsNodes 286.12
337 TestMultiNode/serial/DeleteNode 2.66
338 TestMultiNode/serial/StopMultiNode 167.86
339 TestMultiNode/serial/RestartMultiNode 88.14
340 TestMultiNode/serial/ValidateNameConflict 43.26
347 TestScheduledStopUnix 110.67
351 TestRunningBinaryUpgrade 333.49
353 TestKubernetesUpgrade 295.24
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 103.86
358 TestNoKubernetes/serial/StartWithStopK8s 27.5
359 TestNoKubernetes/serial/Start 41.87
360 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
361 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
362 TestNoKubernetes/serial/ProfileList 1.01
363 TestNoKubernetes/serial/Stop 1.35
364 TestNoKubernetes/serial/StartNoArgs 41.2
365 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
366 TestStoppedBinaryUpgrade/Setup 4
367 TestStoppedBinaryUpgrade/Upgrade 119.57
375 TestNetworkPlugins/group/false 3.8
379 TestISOImage/Setup 41.43
388 TestPause/serial/Start 87.4
389 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
390 TestNetworkPlugins/group/auto/Start 102.77
392 TestISOImage/Binaries/crictl 0.21
393 TestISOImage/Binaries/curl 0.22
394 TestISOImage/Binaries/docker 0.22
395 TestISOImage/Binaries/git 0.2
396 TestISOImage/Binaries/iptables 0.19
397 TestISOImage/Binaries/podman 0.21
398 TestISOImage/Binaries/rsync 0.23
399 TestISOImage/Binaries/socat 0.21
400 TestISOImage/Binaries/wget 0.21
401 TestISOImage/Binaries/VBoxControl 0.22
402 TestISOImage/Binaries/VBoxService 0.22
403 TestNetworkPlugins/group/kindnet/Start 98.77
405 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
406 TestNetworkPlugins/group/auto/KubeletFlags 0.2
407 TestNetworkPlugins/group/auto/NetCatPod 11.3
408 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
409 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
410 TestNetworkPlugins/group/auto/DNS 0.17
411 TestNetworkPlugins/group/auto/Localhost 0.14
412 TestNetworkPlugins/group/auto/HairPin 0.14
413 TestNetworkPlugins/group/calico/Start 75.73
414 TestNetworkPlugins/group/kindnet/DNS 0.16
415 TestNetworkPlugins/group/kindnet/Localhost 0.17
416 TestNetworkPlugins/group/kindnet/HairPin 0.14
417 TestNetworkPlugins/group/custom-flannel/Start 85.74
418 TestNetworkPlugins/group/enable-default-cni/Start 118.99
419 TestNetworkPlugins/group/calico/ControllerPod 6.01
420 TestNetworkPlugins/group/calico/KubeletFlags 0.2
421 TestNetworkPlugins/group/calico/NetCatPod 11.29
422 TestNetworkPlugins/group/flannel/Start 73.6
423 TestNetworkPlugins/group/calico/DNS 0.24
424 TestNetworkPlugins/group/calico/Localhost 0.19
425 TestNetworkPlugins/group/calico/HairPin 0.15
426 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
427 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.02
428 TestNetworkPlugins/group/bridge/Start 60.54
429 TestNetworkPlugins/group/custom-flannel/DNS 0.23
430 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
431 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
433 TestStartStop/group/old-k8s-version/serial/FirstStart 101.28
434 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
435 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.37
436 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
437 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
438 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
439 TestNetworkPlugins/group/flannel/ControllerPod 6.02
441 TestStartStop/group/no-preload/serial/FirstStart 98.14
442 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
443 TestNetworkPlugins/group/flannel/NetCatPod 11.35
444 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
445 TestNetworkPlugins/group/bridge/NetCatPod 12.32
446 TestNetworkPlugins/group/flannel/DNS 0.19
447 TestNetworkPlugins/group/flannel/Localhost 0.16
448 TestNetworkPlugins/group/flannel/HairPin 0.16
449 TestNetworkPlugins/group/bridge/DNS 0.17
450 TestNetworkPlugins/group/bridge/Localhost 0.12
451 TestNetworkPlugins/group/bridge/HairPin 0.15
453 TestStartStop/group/embed-certs/serial/FirstStart 86.47
455 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 101.31
456 TestStartStop/group/old-k8s-version/serial/DeployApp 12.39
457 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.73
458 TestStartStop/group/old-k8s-version/serial/Stop 78.98
459 TestStartStop/group/no-preload/serial/DeployApp 10.34
460 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
461 TestStartStop/group/no-preload/serial/Stop 88.57
462 TestStartStop/group/embed-certs/serial/DeployApp 11.28
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
464 TestStartStop/group/embed-certs/serial/Stop 87.17
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.31
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.73
468 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
469 TestStartStop/group/old-k8s-version/serial/SecondStart 44.79
470 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
471 TestStartStop/group/no-preload/serial/SecondStart 57.19
472 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
473 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
474 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
475 TestStartStop/group/embed-certs/serial/SecondStart 47.81
476 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
477 TestStartStop/group/old-k8s-version/serial/Pause 2.95
479 TestStartStop/group/newest-cni/serial/FirstStart 60.51
480 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
481 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 70.63
482 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.06
483 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.01
484 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
485 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
486 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
487 TestStartStop/group/no-preload/serial/Pause 3.11
488 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
489 TestStartStop/group/embed-certs/serial/Pause 3.59
491 TestISOImage/PersistentMounts//data 0.22
492 TestISOImage/PersistentMounts//var/lib/docker 0.23
493 TestISOImage/PersistentMounts//var/lib/cni 0.22
494 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
495 TestISOImage/PersistentMounts//var/lib/minikube 0.2
496 TestISOImage/PersistentMounts//var/lib/toolbox 0.22
497 TestISOImage/PersistentMounts//var/lib/boot2docker 0.23
498 TestISOImage/VersionJSON 0.21
499 TestISOImage/eBPFSupport 0.38
500 TestStartStop/group/newest-cni/serial/DeployApp 0
501 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
502 TestStartStop/group/newest-cni/serial/Stop 8.91
503 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
504 TestStartStop/group/newest-cni/serial/SecondStart 32.31
505 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
506 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
507 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
508 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.69
509 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
511 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
512 TestStartStop/group/newest-cni/serial/Pause 3.58
x
+
TestDownloadOnly/v1.28.0/json-events (25.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-973101 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-973101 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.099193221s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.10s)
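
The json-events subtest drives minikube's download-only mode, which caches the VM boot image, the preload tarball, and the kubectl binary for the requested Kubernetes version without ever creating a node (all three downloads are visible in the LogsDuration output further down). A minimal sketch of the same invocation, using the freshly built binary and profile name from this run (a released minikube binary takes the same flags):

    # Cache everything needed for Kubernetes v1.28.0 on CRI-O, but do not start a node.
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-973101 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=kvm2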

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 04:25:47.617647    8987 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 04:25:47.617754    8987 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
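
The preload-exists subtest only verifies that the tarball fetched by the previous step is on disk. Assuming MINIKUBE_HOME is set to the same .minikube directory this run uses, the equivalent manual check is roughly:

    # The preload tarball should already be present under the minikube cache directory.
    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"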

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-973101
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-973101: exit status 85 (75.053525ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-973101 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-973101 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:22
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:22.571395    9000 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:22.571673    9000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:22.571683    9000 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:22.571688    9000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:22.571943    9000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	W1216 04:25:22.572116    9000 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22141-5059/.minikube/config/config.json: open /home/jenkins/minikube-integration/22141-5059/.minikube/config/config.json: no such file or directory
	I1216 04:25:22.572675    9000 out.go:368] Setting JSON to true
	I1216 04:25:22.573607    9000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":465,"bootTime":1765858658,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:22.573665    9000 start.go:143] virtualization: kvm guest
	I1216 04:25:22.578328    9000 out.go:99] [download-only-973101] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1216 04:25:22.578447    9000 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 04:25:22.578488    9000 notify.go:221] Checking for updates...
	I1216 04:25:22.579566    9000 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:22.580705    9000 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:22.581999    9000 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:25:22.583134    9000 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:25:22.584217    9000 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:22.586244    9000 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:22.586511    9000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:23.037959    9000 out.go:99] Using the kvm2 driver based on user configuration
	I1216 04:25:23.037995    9000 start.go:309] selected driver: kvm2
	I1216 04:25:23.038002    9000 start.go:927] validating driver "kvm2" against <nil>
	I1216 04:25:23.038339    9000 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:23.038903    9000 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 04:25:23.039085    9000 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:23.039109    9000 cni.go:84] Creating CNI manager for ""
	I1216 04:25:23.039176    9000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:25:23.039187    9000 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:23.039251    9000 start.go:353] cluster config:
	{Name:download-only-973101 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-973101 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:23.039497    9000 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:25:23.041007    9000 out.go:99] Downloading VM boot image ...
	I1216 04:25:23.041064    9000 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1216 04:25:34.486850    9000 out.go:99] Starting "download-only-973101" primary control-plane node in "download-only-973101" cluster
	I1216 04:25:34.486919    9000 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:34.592047    9000 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:34.592089    9000 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:34.592299    9000 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:34.594006    9000 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 04:25:34.594025    9000 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 04:25:34.704081    9000 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1216 04:25:34.704209    9000 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:46.341510    9000 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1216 04:25:46.341852    9000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-973101/config.json ...
	I1216 04:25:46.341880    9000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-973101/config.json: {Name:mk0709e63583044dff54a24eca95b4def09afdd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:46.342038    9000 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 04:25:46.342270    9000 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-973101 host does not exist
	  To start a cluster, run: "minikube start -p download-only-973101"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
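
Exit status 85 is not a regression here: the profile was created with --download-only, so there is no control-plane host for minikube logs to query, and the subtest still passes because it appears to only measure how long the logs command takes. A sketch of the same check against this run's profile:

    # Expected to fail quickly, since download-only never creates the host.
    out/minikube-linux-amd64 logs -p download-only-973101
    echo "exit status: $?"   # 85 in this run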

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-973101
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)
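
The two delete subtests cover cleanup: DeleteAll removes every profile, and DeleteAlwaysSucceeds confirms that a follow-up delete of the named profile still exits cleanly. The commands, as run above:

    # Remove all profiles, then delete the named profile again; both should succeed.
    out/minikube-linux-amd64 delete --all
    out/minikube-linux-amd64 delete -p download-only-973101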

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (10.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-667568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-667568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.94815017s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (10.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 04:25:58.924192    8987 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 04:25:58.924237    8987 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-667568
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-667568: exit status 85 (69.46982ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-973101 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-973101 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-973101                                                                                                                                                 │ download-only-973101 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-667568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-667568 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:48.025196    9261 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:48.025296    9261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:48.025302    9261 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:48.025308    9261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:48.025483    9261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:25:48.025964    9261 out.go:368] Setting JSON to true
	I1216 04:25:48.026724    9261 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":490,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:48.026790    9261 start.go:143] virtualization: kvm guest
	I1216 04:25:48.028664    9261 out.go:99] [download-only-667568] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:48.028850    9261 notify.go:221] Checking for updates...
	I1216 04:25:48.029885    9261 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:48.030953    9261 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:48.031971    9261 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:25:48.032964    9261 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:25:48.033929    9261 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:48.035791    9261 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:48.036016    9261 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:48.064143    9261 out.go:99] Using the kvm2 driver based on user configuration
	I1216 04:25:48.064169    9261 start.go:309] selected driver: kvm2
	I1216 04:25:48.064174    9261 start.go:927] validating driver "kvm2" against <nil>
	I1216 04:25:48.064487    9261 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:48.064993    9261 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 04:25:48.065598    9261 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:48.065622    9261 cni.go:84] Creating CNI manager for ""
	I1216 04:25:48.065669    9261 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:25:48.065678    9261 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:48.065712    9261 start.go:353] cluster config:
	{Name:download-only-667568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-667568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:48.065813    9261 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:25:48.066959    9261 out.go:99] Starting "download-only-667568" primary control-plane node in "download-only-667568" cluster
	I1216 04:25:48.066976    9261 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:48.167460    9261 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:48.167496    9261 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:48.167677    9261 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:48.169231    9261 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1216 04:25:48.169290    9261 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 04:25:48.283659    9261 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1216 04:25:48.283700    9261 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:57.844154    9261 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 04:25:57.844517    9261 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-667568/config.json ...
	I1216 04:25:57.844545    9261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-667568/config.json: {Name:mk332921006b0e54ef04bf6a1127b5dcf3dacd4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:25:57.844702    9261 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 04:25:57.844850    9261 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/linux/amd64/v1.34.2/kubectl
	
	
	* The control-plane node download-only-667568 host does not exist
	  To start a cluster, run: "minikube start -p download-only-667568"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-667568
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/json-events (10.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-292678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-292678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.955160189s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (10.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 04:26:10.231790    8987 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 04:26:10.231841    8987 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-292678
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-292678: exit status 85 (143.598031ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-973101 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-973101 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-973101                                                                                                                                                        │ download-only-973101 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-667568 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-667568 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ delete  │ -p download-only-667568                                                                                                                                                        │ download-only-667568 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │ 16 Dec 25 04:25 UTC │
	│ start   │ -o=json --download-only -p download-only-292678 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-292678 │ jenkins │ v1.37.0 │ 16 Dec 25 04:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 04:25:59
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 04:25:59.325242    9472 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:25:59.325362    9472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:59.325371    9472 out.go:374] Setting ErrFile to fd 2...
	I1216 04:25:59.325375    9472 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:25:59.325551    9472 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:25:59.325980    9472 out.go:368] Setting JSON to true
	I1216 04:25:59.326740    9472 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":501,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:25:59.326803    9472 start.go:143] virtualization: kvm guest
	I1216 04:25:59.328616    9472 out.go:99] [download-only-292678] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:25:59.328764    9472 notify.go:221] Checking for updates...
	I1216 04:25:59.330034    9472 out.go:171] MINIKUBE_LOCATION=22141
	I1216 04:25:59.331352    9472 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:25:59.332389    9472 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:25:59.333378    9472 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:25:59.334443    9472 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 04:25:59.336459    9472 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 04:25:59.336710    9472 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:25:59.365599    9472 out.go:99] Using the kvm2 driver based on user configuration
	I1216 04:25:59.365637    9472 start.go:309] selected driver: kvm2
	I1216 04:25:59.365642    9472 start.go:927] validating driver "kvm2" against <nil>
	I1216 04:25:59.365969    9472 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 04:25:59.366416    9472 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 04:25:59.366569    9472 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 04:25:59.366600    9472 cni.go:84] Creating CNI manager for ""
	I1216 04:25:59.366676    9472 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 04:25:59.366686    9472 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 04:25:59.366723    9472 start.go:353] cluster config:
	{Name:download-only-292678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-292678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:25:59.366822    9472 iso.go:125] acquiring lock: {Name:mk32a15185e6e6998579c2a7c92376b162445713 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 04:25:59.367961    9472 out.go:99] Starting "download-only-292678" primary control-plane node in "download-only-292678" cluster
	I1216 04:25:59.367976    9472 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:25:59.479409    9472 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:25:59.479436    9472 cache.go:65] Caching tarball of preloaded images
	I1216 04:25:59.479595    9472 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:25:59.481241    9472 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1216 04:25:59.481259    9472 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 04:25:59.589329    9472 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1216 04:25:59.589369    9472 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 04:26:08.200808    9472 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1216 04:26:08.201131    9472 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-292678/config.json ...
	I1216 04:26:08.201162    9472 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/download-only-292678/config.json: {Name:mk665d1a4f084cd2e321ef0a5508d8bea30de0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 04:26:08.201334    9472 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 04:26:08.201467    9472 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22141-5059/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-292678 host does not exist
	  To start a cluster, run: "minikube start -p download-only-292678"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-292678
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 04:26:11.080879    8987 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-194309 --alsologtostderr --binary-mirror http://127.0.0.1:44661 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-194309" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-194309
--- PASS: TestBinaryMirror (0.63s)
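
TestBinaryMirror starts a download-only profile with --binary-mirror pointing at a local HTTP server, so the Kubernetes binaries are fetched through the mirror rather than directly from dl.k8s.io. A sketch of the same start, reusing the profile name and mirror address from this run (the mirror port is ephemeral and specific to the harness):

    # Route Kubernetes binary downloads through a local mirror, then clean up the profile.
    out/minikube-linux-amd64 start --download-only -p binary-mirror-194309 --alsologtostderr \
      --binary-mirror http://127.0.0.1:44661 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-194309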

                                                
                                    
x
+
TestOffline (121.19s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-124712 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-124712 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (2m0.262957093s)
helpers_test.go:176: Cleaning up "offline-crio-124712" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-124712
--- PASS: TestOffline (121.19s)
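
TestOffline simply brings up and then deletes a complete CRI-O cluster on a 3 GB memory budget; --wait=true keeps the start command blocking until the cluster components report ready. The invocation recorded above, with this run's profile name:

    # Start a CRI-O cluster under kvm2 and wait for the components to become healthy.
    out/minikube-linux-amd64 start -p offline-crio-124712 --alsologtostderr -v=1 \
      --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio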

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153066
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-153066: exit status 85 (60.864231ms)

                                                
                                                
-- stdout --
	* Profile "addons-153066" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153066"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
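
Addon commands against a profile that has not been created yet are expected to fail fast with exit status 85 and a pointer back to minikube start; the disable variant below behaves the same way. Sketched against this run's profile name:

    # Both commands should exit 85 because the "addons-153066" profile does not exist yet.
    out/minikube-linux-amd64 addons enable dashboard -p addons-153066
    out/minikube-linux-amd64 addons disable dashboard -p addons-153066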

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153066
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-153066: exit status 85 (61.465468ms)

                                                
                                                
-- stdout --
	* Profile "addons-153066" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-153066"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (134.14s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-153066 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-153066 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.139691829s)
--- PASS: TestAddons/Setup (134.14s)
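
Setup is the long pole of the addons suite: a single start enables the whole addon set (registry, metrics-server, csi-hostpath-driver, gcp-auth, ingress, ingress-dns and the rest) and waits for everything to come up, which took a little over two minutes here. A trimmed sketch of the same start, keeping only a few of the addons from the run above:

    # Enable a subset of the addons exercised by this suite at start time (full flag list in the log above).
    out/minikube-linux-amd64 start -p addons-153066 --wait=true --memory=4096 --alsologtostderr \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns --addons=gcp-auth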

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-153066 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-153066 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)
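
The Namespaces subtest checks that the gcp-auth addon propagates its credentials secret into namespaces created after the addon is enabled. The two kubectl calls it makes:

    # Create a fresh namespace, then confirm the gcp-auth secret has been replicated into it.
    kubectl --context addons-153066 create ns new-namespace
    kubectl --context addons-153066 get secret gcp-auth -n new-namespace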

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-153066 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-153066 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [e064416f-1c71-491d-b296-b0861bd3abce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [e064416f-1c71-491d-b296-b0861bd3abce] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004583908s
addons_test.go:696: (dbg) Run:  kubectl --context addons-153066 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-153066 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-153066 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                    
TestAddons/parallel/Registry (18.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.569869ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-bxf9q" [afd4c327-e7bf-4429-ad65-493431f56200] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005533175s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-pbkbs" [0bced886-f1b7-415e-91d0-5f533bcfe8c0] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003544422s
addons_test.go:394: (dbg) Run:  kubectl --context addons-153066 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-153066 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-153066 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.083838977s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 ip
2025/12/16 04:29:05 [DEBUG] GET http://192.168.39.189:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.89s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 6.108872ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-153066
addons_test.go:334: (dbg) Run:  kubectl --context addons-153066 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-hg8m7" [1cfe5047-1562-4f84-b845-265986047922] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003884309s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable inspektor-gadget --alsologtostderr -v=1: (5.73371774s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 46.395079ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-qm9rk" [0ea4c9ef-e70d-4d40-8e23-271dbeeb59b9] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00447279s
addons_test.go:465: (dbg) Run:  kubectl --context addons-153066 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

                                                
                                    
TestAddons/parallel/CSI (49.18s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 04:28:59.109220    8987 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 04:28:59.113723    8987 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 04:28:59.113741    8987 kapi.go:107] duration metric: took 4.533432ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.543144ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-153066 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-153066 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [415aef58-3211-42e8-8488-b31cd84d8b71] Pending
helpers_test.go:353: "task-pv-pod" [415aef58-3211-42e8-8488-b31cd84d8b71] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [415aef58-3211-42e8-8488-b31cd84d8b71] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005025113s
addons_test.go:574: (dbg) Run:  kubectl --context addons-153066 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-153066 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-153066 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-153066 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-153066 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-153066 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-153066 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [52cdea54-d977-4cc9-94fa-29b50cd39a61] Pending
helpers_test.go:353: "task-pv-pod-restore" [52cdea54-d977-4cc9-94fa-29b50cd39a61] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.00385332s
addons_test.go:616: (dbg) Run:  kubectl --context addons-153066 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-153066 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-153066 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.846130401s)
--- PASS: TestAddons/parallel/CSI (49.18s)

                                                
                                    
TestAddons/parallel/Headlamp (20.2s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-153066 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-7gpgl" [edc4b566-7c2a-4c67-9094-9998b6b3a35d] Pending
helpers_test.go:353: "headlamp-dfcdc64b-7gpgl" [edc4b566-7c2a-4c67-9094-9998b6b3a35d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-7gpgl" [edc4b566-7c2a-4c67-9094-9998b6b3a35d] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.006874418s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable headlamp --alsologtostderr -v=1: (6.368252484s)
--- PASS: TestAddons/parallel/Headlamp (20.20s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-l2q5t" [90f31d4b-42bd-4d76-b144-963ec95f7d58] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00406916s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (55.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-153066 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-153066 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [58723eef-9d11-432e-a180-0f6864226347] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [58723eef-9d11-432e-a180-0f6864226347] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [58723eef-9d11-432e-a180-0f6864226347] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003601306s
addons_test.go:969: (dbg) Run:  kubectl --context addons-153066 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 ssh "cat /opt/local-path-provisioner/pvc-f15dac49-fd5a-496e-bac7-888f900e7fe3_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-153066 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-153066 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.836447505s)
--- PASS: TestAddons/parallel/LocalPath (55.62s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.01s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-z4dn4" [3c096eaf-758d-432e-81f4-c8dfdd7b23cb] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008774806s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.000831763s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.01s)

                                                
                                    
TestAddons/parallel/Yakd (12.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-qlkdk" [33571d72-c08f-4871-9b6e-23fda516aa27] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005539331s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-153066 addons disable yakd --alsologtostderr -v=1: (6.278853319s)
--- PASS: TestAddons/parallel/Yakd (12.29s)

                                                
                                    
TestAddons/StoppedEnableDisable (86.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-153066
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-153066: (1m26.069021542s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-153066
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-153066
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-153066
--- PASS: TestAddons/StoppedEnableDisable (86.26s)

                                                
                                    
TestCertOptions (47.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-995178 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-995178 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (46.289736247s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-995178 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-995178 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-995178 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-995178" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-995178
--- PASS: TestCertOptions (47.55s)

                                                
                                    
TestCertExpiration (297.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843108 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1216 05:23:27.159797    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843108 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m21.747369348s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-843108 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-843108 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (35.126839673s)
helpers_test.go:176: Cleaning up "cert-expiration-843108" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-843108
--- PASS: TestCertExpiration (297.82s)

                                                
                                    
TestForceSystemdFlag (51.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-257338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-257338 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (50.584420842s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-257338 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-257338" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-257338
--- PASS: TestForceSystemdFlag (51.66s)

                                                
                                    
TestForceSystemdEnv (43.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-415004 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-415004 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.570978718s)
helpers_test.go:176: Cleaning up "force-systemd-env-415004" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-415004
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-415004: (1.663125256s)
--- PASS: TestForceSystemdEnv (43.23s)

                                                
                                    
TestErrorSpam/setup (36.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-203048 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-203048 --driver=kvm2  --container-runtime=crio
E1216 04:33:27.160391    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.166812    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.178320    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.199694    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.241256    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.322752    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.484313    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:27.806063    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:28.448255    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:29.729813    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:32.291685    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:33:37.413277    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-203048 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-203048 --driver=kvm2  --container-runtime=crio: (36.202530386s)
--- PASS: TestErrorSpam/setup (36.20s)

                                                
                                    
TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.66s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 status
--- PASS: TestErrorSpam/status (0.66s)

                                                
                                    
TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (4.63s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop
E1216 04:33:47.654565    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop: (2.208727799s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop: (1.341953961s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-203048 --log_dir /tmp/nospam-203048 stop: (1.081381541s)
--- PASS: TestErrorSpam/stop (4.63s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/test/nested/copy/8987/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1216 04:34:08.136324    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-448088 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (50.062173572s)
--- PASS: TestFunctional/serial/StartWithProxy (50.06s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (51.02s)

=== RUN   TestFunctional/serial/SoftStart
I1216 04:34:42.448642    8987 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --alsologtostderr -v=8
E1216 04:34:49.098074    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-448088 --alsologtostderr -v=8: (51.017208622s)
functional_test.go:678: soft start took 51.018024764s for "functional-448088" cluster.
I1216 04:35:33.466258    8987 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (51.02s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-448088 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:3.1: (1.007740268s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:3.3: (1.093994812s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 cache add registry.k8s.io/pause:latest: (1.066686272s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-448088 /tmp/TestFunctionalserialCacheCmdcacheadd_local2332112840/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache add minikube-local-cache-test:functional-448088
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 cache add minikube-local-cache-test:functional-448088: (1.928276223s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache delete minikube-local-cache-test:functional-448088
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-448088
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (168.847351ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 kubectl -- --context functional-448088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-448088 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.47s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 04:36:11.021946    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-448088 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.468596013s)
functional_test.go:776: restart took 42.468714191s for "functional-448088" cluster.
I1216 04:36:23.703034    8987 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (42.47s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-448088 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 logs: (1.319482719s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 logs --file /tmp/TestFunctionalserialLogsFileCmd2517470833/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 logs --file /tmp/TestFunctionalserialLogsFileCmd2517470833/001/logs.txt: (1.322340861s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (4.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-448088 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-448088
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-448088: exit status 115 (229.932249ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.162:30317 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-448088 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 config get cpus: exit status 14 (66.96598ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 config get cpus: exit status 14 (68.976898ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-448088 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-448088 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 15385: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.88s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-448088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (119.605953ms)

                                                
                                                
-- stdout --
	* [functional-448088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:37:03.166315   15326 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:37:03.166454   15326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:03.166467   15326 out.go:374] Setting ErrFile to fd 2...
	I1216 04:37:03.166476   15326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:03.166798   15326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:37:03.167376   15326 out.go:368] Setting JSON to false
	I1216 04:37:03.168188   15326 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1165,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:37:03.168611   15326 start.go:143] virtualization: kvm guest
	I1216 04:37:03.170603   15326 out.go:179] * [functional-448088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:37:03.174350   15326 notify.go:221] Checking for updates...
	I1216 04:37:03.174398   15326 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:37:03.175783   15326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:37:03.176743   15326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:37:03.177976   15326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:37:03.179222   15326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:37:03.180537   15326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:37:03.182024   15326 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:37:03.182613   15326 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:37:03.215451   15326 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 04:37:03.216959   15326 start.go:309] selected driver: kvm2
	I1216 04:37:03.216978   15326 start.go:927] validating driver "kvm2" against &{Name:functional-448088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-448088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:37:03.217100   15326 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:37:03.219554   15326 out.go:203] 
	W1216 04:37:03.220764   15326 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:37:03.221913   15326 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
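DryRun shows start-time validation running without touching the existing VM: asking for 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it falls under the 1800MB usable minimum, while the follow-up dry run without the undersized memory request passes. Sketch of both invocations:

	# rejected: 250MiB is below the 1800MB minimum, exit status 23
	minikube start -p functional-448088 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	# validates cleanly against the existing profile
	minikube start -p functional-448088 --dry-run --driver=kvm2 --container-runtime=crio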

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-448088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-448088 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (108.819583ms)

                                                
                                                
-- stdout --
	* [functional-448088] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:37:03.052533   15310 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:37:03.052838   15310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:03.052847   15310 out.go:374] Setting ErrFile to fd 2...
	I1216 04:37:03.052852   15310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:37:03.053172   15310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:37:03.053651   15310 out.go:368] Setting JSON to false
	I1216 04:37:03.054467   15310 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1165,"bootTime":1765858658,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:37:03.054523   15310 start.go:143] virtualization: kvm guest
	I1216 04:37:03.056794   15310 out.go:179] * [functional-448088] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 04:37:03.058068   15310 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:37:03.058086   15310 notify.go:221] Checking for updates...
	I1216 04:37:03.060650   15310 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:37:03.061789   15310 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:37:03.062859   15310 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:37:03.063991   15310 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:37:03.065043   15310 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:37:03.066513   15310 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:37:03.067015   15310 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:37:03.098072   15310 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 04:37:03.099465   15310 start.go:309] selected driver: kvm2
	I1216 04:37:03.099483   15310 start.go:927] validating driver "kvm2" against &{Name:functional-448088 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-448088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:37:03.099561   15310 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:37:03.101476   15310 out.go:203] 
	W1216 04:37:03.102555   15310 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:37:03.103718   15310 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
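InternationalLanguage repeats the undersized dry run under a French locale, so the same RSRC_INSUFFICIENT_REQ_MEMORY failure is reported as "Fermeture en raison de ..." instead of "Exiting due to ...". A hypothetical reproduction, assuming minikube selects its message catalogue from the standard locale variables (LC_ALL / LANG):

	# assumed reproduction: force a French locale for a single invocation
	LC_ALL=fr_FR.UTF-8 minikube start -p functional-448088 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio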

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.67s)
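StatusCmd exercises the three output modes used above: the default human-readable status, a Go-template format string via -f, and JSON via -o json. Sketch:

	minikube -p functional-448088 status
	minikube -p functional-448088 status -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-448088 status -o json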

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (24.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-448088 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-448088 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-6c5bm" [37ec0f28-3651-47d4-a1cd-91c73ac667a1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-6c5bm" [37ec0f28-3651-47d4-a1cd-91c73ac667a1] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.014505632s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.162:31902
functional_test.go:1680: http://192.168.39.162:31902: success! body:
Request served by hello-node-connect-7d85dfc575-6c5bm

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.162:31902
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (24.65s)
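ServiceCmdConnect is the standard NodePort round trip: create a deployment from the echo-server image, expose it as a NodePort on 8080, ask minikube for the node URL, and verify the echoed request. Minimal sketch of the same flow (the pod readiness wait is omitted):

	kubectl --context functional-448088 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-448088 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints e.g. http://192.168.39.162:31902 for this profile
	minikube -p functional-448088 service hello-node-connect --url
	curl "$(minikube -p functional-448088 service hello-node-connect --url)"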

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (41.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [64b97708-705e-455b-b481-3c9fff30c2ae] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00628537s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-448088 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-448088 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-448088 get pvc myclaim -o=json
I1216 04:36:37.234023    8987 retry.go:31] will retry after 2.045450128s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2828e3a8-afff-49cb-88ef-c91184057d52 ResourceVersion:688 Generation:0 CreationTimestamp:2025-12-16 04:36:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0009f3b10 VolumeMode:0xc0009f3b20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-448088 get pvc myclaim -o=json
I1216 04:36:39.354359    8987 retry.go:31] will retry after 4.377858229s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2828e3a8-afff-49cb-88ef-c91184057d52 ResourceVersion:688 Generation:0 CreationTimestamp:2025-12-16 04:36:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019a50b0 VolumeMode:0xc0019a50c0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-448088 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-448088 apply -f testdata/storage-provisioner/pod.yaml
I1216 04:36:43.933159    8987 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [e8ab659e-f9cf-4803-ac33-0eec4c50703a] Pending
helpers_test.go:353: "sp-pod" [e8ab659e-f9cf-4803-ac33-0eec4c50703a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [e8ab659e-f9cf-4803-ac33-0eec4c50703a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.012389144s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-448088 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-448088 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-448088 delete -f testdata/storage-provisioner/pod.yaml: (1.233758202s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-448088 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f8679d03-0d8b-4aba-964e-f5f5e5b08e32] Pending
helpers_test.go:353: "sp-pod" [f8679d03-0d8b-4aba-964e-f5f5e5b08e32] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003847608s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-448088 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.61s)
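PersistentVolumeClaim checks that data written through the claim outlives the pod: once myclaim binds (the two retries above are just the provisioner catching up), a file touched from the first sp-pod is still there for a freshly scheduled replacement. Sketch of the persistence check with the same testdata manifests (the claim is mounted at /tmp/mount, per the exec paths above):

	kubectl --context functional-448088 apply -f testdata/storage-provisioner/pvc.yaml   # myclaim, 500Mi
	kubectl --context functional-448088 apply -f testdata/storage-provisioner/pod.yaml   # sp-pod using the claim
	kubectl --context functional-448088 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-448088 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-448088 apply -f testdata/storage-provisioner/pod.yaml   # new sp-pod, same claim
	kubectl --context functional-448088 exec sp-pod -- ls /tmp/mount                     # foo is still present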

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh -n functional-448088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cp functional-448088:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3052828223/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh -n functional-448088 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh -n functional-448088 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)
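CpCmd covers both copy directions plus a guest destination directory that does not exist yet. Sketch (the host-side destination path is illustrative):

	# host -> VM
	minikube -p functional-448088 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# VM -> host
	minikube -p functional-448088 cp functional-448088:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> VM, creating /tmp/does/not/exist on the way
	minikube -p functional-448088 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	minikube -p functional-448088 ssh -n functional-448088 "sudo cat /tmp/does/not/exist/cp-test.txt"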

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-448088 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-d87vd" [40f7f909-7acb-4a2d-8e6b-cc8b42605325] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-d87vd" [40f7f909-7acb-4a2d-8e6b-cc8b42605325] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.050719645s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;": exit status 1 (170.083047ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:36:54.130654    8987 retry.go:31] will retry after 713.50105ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;": exit status 1 (148.526669ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:36:54.993570    8987 retry.go:31] will retry after 967.0291ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;": exit status 1 (416.314298ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:36:56.377366    8987 retry.go:31] will retry after 2.262228268s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;": exit status 1 (118.789804ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:36:58.759167    8987 retry.go:31] will retry after 3.440367828s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.60s)
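The retries above are the expected pattern: the pod reports Running before mysqld inside it has finished initializing, so the early exec attempts fail first with ERROR 1045 and then ERROR 2002 until initialization completes and the server accepts the configured password. The final, successful probe, with the pod name taken from this run:

	# early attempts may fail with ERROR 1045 / ERROR 2002 until mysqld is ready
	kubectl --context functional-448088 exec mysql-6bcdcbc558-d87vd -- mysql -ppassword -e "show databases;"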

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8987/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /etc/test/nested/copy/8987/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)
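FileSync verifies that a host-side test file shows up inside the VM at /etc/test/nested/copy/8987/hosts (8987 is the test process id). Assuming the usual minikube file-sync mechanism, where files staged under $MINIKUBE_HOME/files/<path> are copied to the same <path> in the guest, the check reduces to:

	minikube -p functional-448088 ssh "sudo cat /etc/test/nested/copy/8987/hosts"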

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8987.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /etc/ssl/certs/8987.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8987.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /usr/share/ca-certificates/8987.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /etc/ssl/certs/89872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /usr/share/ca-certificates/89872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.09s)
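CertSync asserts that each synced certificate is readable from both /usr/share/ca-certificates and /etc/ssl/certs, including the hash-named copies (51391683.0, 3ec20f2e.0) that the test also probes, i.e. presumably the cert was made available to the guest trust store. Two of the probes from above:

	minikube -p functional-448088 ssh "sudo cat /etc/ssl/certs/8987.pem"
	minikube -p functional-448088 ssh "sudo cat /etc/ssl/certs/51391683.0"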

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-448088 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "sudo systemctl is-active docker": exit status 1 (168.410397ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "sudo systemctl is-active containerd": exit status 1 (187.36931ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
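With crio selected as the container runtime, the docker and containerd units are expected to be inactive in the guest: systemctl is-active prints "inactive" and exits 3, which the ssh wrapper surfaces as a non-zero minikube exit. Sketch, including the active runtime for contrast (the crio probe is an assumption, not part of this test):

	minikube -p functional-448088 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
	minikube -p functional-448088 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit
	minikube -p functional-448088 ssh "sudo systemctl is-active crio"         # should report active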

                                                
                                    
x
+
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-448088 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-448088
localhost/kicbase/echo-server:functional-448088
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-448088 image ls --format short --alsologtostderr:
I1216 04:37:09.433011   15710 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:09.433104   15710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.433111   15710 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:09.433116   15710 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.433393   15710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:37:09.433995   15710 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.434145   15710 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.436833   15710 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:09.439890   15710 main.go:143] libmachine: domain functional-448088 has defined MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.440377   15710 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:1d:03", ip: ""} in network mk-functional-448088: {Iface:virbr1 ExpiryTime:2025-12-16 05:34:07 +0000 UTC Type:0 Mac:52:54:00:38:1d:03 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-448088 Clientid:01:52:54:00:38:1d:03}
I1216 04:37:09.440413   15710 main.go:143] libmachine: domain functional-448088 has defined IP address 192.168.39.162 and MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.440622   15710 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-448088/id_rsa Username:docker}
I1216 04:37:09.541204   15710 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
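The ImageCommands listing tests render the same image inventory through each --format option, all backed by "sudo crictl images --output json" inside the guest as the stderr traces show. Sketch of the four renderings:

	minikube -p functional-448088 image ls --format short
	minikube -p functional-448088 image ls --format table
	minikube -p functional-448088 image ls --format json
	minikube -p functional-448088 image ls --format yaml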

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-448088 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/minikube-local-cache-test     │ functional-448088  │ 24e33352bbfcc │ 3.33kB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-448088  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-448088 image ls --format table --alsologtostderr:
I1216 04:37:10.207042   15787 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:10.207150   15787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:10.207159   15787 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:10.207163   15787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:10.207359   15787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:37:10.207906   15787 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:10.207991   15787 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:10.210111   15787 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:10.212661   15787 main.go:143] libmachine: domain functional-448088 has defined MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:10.213090   15787 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:1d:03", ip: ""} in network mk-functional-448088: {Iface:virbr1 ExpiryTime:2025-12-16 05:34:07 +0000 UTC Type:0 Mac:52:54:00:38:1d:03 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-448088 Clientid:01:52:54:00:38:1d:03}
I1216 04:37:10.213130   15787 main.go:143] libmachine: domain functional-448088 has defined IP address 192.168.39.162 and MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:10.213344   15787 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-448088/id_rsa Username:docker}
I1216 04:37:10.355204   15787 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-448088 image ls --format json --alsologtostderr:
[{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"r
epoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size
":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"da86e6ba6ca197b
f6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"01e8bacf0f50095b9b12daf485979dbcb45
4e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b939
3d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-448088"],"size":"4943877"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"24e33352bbfcc56626c5086c831b2a574b88a2b4859f249bec89f32bb534e58f","repoDigests":["localhost/minikube-local-cache-test@sha256:
14f241e86b415d057bf0b218bc053effb9da8fa7be3ed24e4a8a142d801a8a1f"],"repoTags":["localhost/minikube-local-cache-test:functional-448088"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-448088 image ls --format json --alsologtostderr:
I1216 04:37:09.970841   15767 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:09.971123   15767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.971133   15767 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:09.971137   15767 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.971347   15767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:37:09.971937   15767 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.972043   15767 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.974519   15767 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:09.976995   15767 main.go:143] libmachine: domain functional-448088 has defined MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.977490   15767 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:1d:03", ip: ""} in network mk-functional-448088: {Iface:virbr1 ExpiryTime:2025-12-16 05:34:07 +0000 UTC Type:0 Mac:52:54:00:38:1d:03 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-448088 Clientid:01:52:54:00:38:1d:03}
I1216 04:37:09.977519   15767 main.go:143] libmachine: domain functional-448088 has defined IP address 192.168.39.162 and MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.977715   15767 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-448088/id_rsa Username:docker}
I1216 04:37:10.091474   15767 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-448088 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-448088
size: "4943877"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 24e33352bbfcc56626c5086c831b2a574b88a2b4859f249bec89f32bb534e58f
repoDigests:
- localhost/minikube-local-cache-test@sha256:14f241e86b415d057bf0b218bc053effb9da8fa7be3ed24e4a8a142d801a8a1f
repoTags:
- localhost/minikube-local-cache-test:functional-448088
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-448088 image ls --format yaml --alsologtostderr:
I1216 04:37:09.674522   15746 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:09.674827   15746 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.674841   15746 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:09.674847   15746 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:09.675096   15746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:37:09.675629   15746 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.675741   15746 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:09.678131   15746 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:09.681038   15746 main.go:143] libmachine: domain functional-448088 has defined MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.681561   15746 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:1d:03", ip: ""} in network mk-functional-448088: {Iface:virbr1 ExpiryTime:2025-12-16 05:34:07 +0000 UTC Type:0 Mac:52:54:00:38:1d:03 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-448088 Clientid:01:52:54:00:38:1d:03}
I1216 04:37:09.681591   15746 main.go:143] libmachine: domain functional-448088 has defined IP address 192.168.39.162 and MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:09.681848   15746 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-448088/id_rsa Username:docker}
I1216 04:37:09.794412   15746 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
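Both listing variants above are thin wrappers: per the stderr traces, the CLI SSHes into the guest and runs sudo crictl images --output json, then re-renders the result in the requested format. A rough manual equivalent against the same profile (assuming the freshly built binary at out/minikube-linux-amd64) would be:

    out/minikube-linux-amd64 -p functional-448088 image ls --format json --alsologtostderr
    out/minikube-linux-amd64 -p functional-448088 image ls --format yaml --alsologtostderr
    out/minikube-linux-amd64 -p functional-448088 ssh sudo crictl images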

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh pgrep buildkitd: exit status 1 (207.590917ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image build -t localhost/my-image:functional-448088 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 image build -t localhost/my-image:functional-448088 testdata/build --alsologtostderr: (5.33383927s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-448088 image build -t localhost/my-image:functional-448088 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 62a448f8f57
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-448088
--> bf62bccf255
Successfully tagged localhost/my-image:functional-448088
bf62bccf255870be20b4df395b6bc7cb294d0df460d4d9701afa7ae30cddd003
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-448088 image build -t localhost/my-image:functional-448088 testdata/build --alsologtostderr:
I1216 04:37:10.035414   15777 out.go:360] Setting OutFile to fd 1 ...
I1216 04:37:10.035721   15777 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:10.035732   15777 out.go:374] Setting ErrFile to fd 2...
I1216 04:37:10.035738   15777 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:37:10.035948   15777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:37:10.036630   15777 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:10.037390   15777 config.go:182] Loaded profile config "functional-448088": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 04:37:10.039816   15777 ssh_runner.go:195] Run: systemctl --version
I1216 04:37:10.042498   15777 main.go:143] libmachine: domain functional-448088 has defined MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:10.042956   15777 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:1d:03", ip: ""} in network mk-functional-448088: {Iface:virbr1 ExpiryTime:2025-12-16 05:34:07 +0000 UTC Type:0 Mac:52:54:00:38:1d:03 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:functional-448088 Clientid:01:52:54:00:38:1d:03}
I1216 04:37:10.042993   15777 main.go:143] libmachine: domain functional-448088 has defined IP address 192.168.39.162 and MAC address 52:54:00:38:1d:03 in network mk-functional-448088
I1216 04:37:10.043223   15777 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-448088/id_rsa Username:docker}
I1216 04:37:10.146749   15777 build_images.go:162] Building image from path: /tmp/build.1880734404.tar
I1216 04:37:10.146855   15777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:37:10.173257   15777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1880734404.tar
I1216 04:37:10.184512   15777 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1880734404.tar: stat -c "%s %y" /var/lib/minikube/build/build.1880734404.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1880734404.tar': No such file or directory
I1216 04:37:10.184548   15777 ssh_runner.go:362] scp /tmp/build.1880734404.tar --> /var/lib/minikube/build/build.1880734404.tar (3072 bytes)
I1216 04:37:10.270938   15777 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1880734404
I1216 04:37:10.290249   15777 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1880734404 -xf /var/lib/minikube/build/build.1880734404.tar
I1216 04:37:10.315298   15777 crio.go:315] Building image: /var/lib/minikube/build/build.1880734404
I1216 04:37:10.315376   15777 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-448088 /var/lib/minikube/build/build.1880734404 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 04:37:15.267190   15777 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-448088 /var/lib/minikube/build/build.1880734404 --cgroup-manager=cgroupfs: (4.951783762s)
I1216 04:37:15.267252   15777 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1880734404
I1216 04:37:15.284957   15777 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1880734404.tar
I1216 04:37:15.298299   15777 build_images.go:218] Built localhost/my-image:functional-448088 from /tmp/build.1880734404.tar
I1216 04:37:15.298348   15777 build_images.go:134] succeeded building to: functional-448088
I1216 04:37:15.298355   15777 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
2025/12/16 04:37:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.75s)
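Judging by the three build steps recorded above, the testdata/build context is a minimal Dockerfile; a sketch of what it likely contains (reconstructed from the log, not the checked-in file):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

On the crio runtime the CLI copies the tarred context to /var/lib/minikube/build on the guest and drives the build with sudo podman build ... --cgroup-manager=cgroupfs, so the invocation exercised here is simply:

    out/minikube-linux-amd64 -p functional-448088 image build -t localhost/my-image:functional-448088 testdata/build --alsologtostderr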

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.013688754s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-448088
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image load --daemon kicbase/echo-server:functional-448088 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 image load --daemon kicbase/echo-server:functional-448088 --alsologtostderr: (1.092172764s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image load --daemon kicbase/echo-server:functional-448088 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-448088
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image load --daemon kicbase/echo-server:functional-448088 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image save kicbase/echo-server:functional-448088 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 image save kicbase/echo-server:functional-448088 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (10.882104701s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image rm kicbase/echo-server:functional-448088 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-448088
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 image save --daemon kicbase/echo-server:functional-448088 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-448088
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
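Taken together, the ImageLoadDaemon/SaveToFile/Remove/LoadFromFile/SaveDaemon cases above exercise a full round trip between the host docker daemon, a tarball, and the guest's crio store. Condensed, with /path/to/echo-server-save.tar standing in for the workspace path used in this run:

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-448088
    out/minikube-linux-amd64 -p functional-448088 image load --daemon kicbase/echo-server:functional-448088
    out/minikube-linux-amd64 -p functional-448088 image save kicbase/echo-server:functional-448088 /path/to/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-448088 image rm kicbase/echo-server:functional-448088
    out/minikube-linux-amd64 -p functional-448088 image load /path/to/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-448088 image save --daemon kicbase/echo-server:functional-448088
    out/minikube-linux-amd64 -p functional-448088 image ls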

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (16.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-448088 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-448088 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-cffcq" [5de55dc4-cc1d-4700-918a-225320f7434f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-cffcq" [5de55dc4-cc1d-4700-918a-225320f7434f] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.005423777s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.24s)
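The deployment the ServiceCmd checks rely on is created with stock kubectl; with the same context the equivalent steps are roughly (the readiness wait below approximates the test's internal pod polling):

    kubectl --context functional-448088 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-448088 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-448088 get pods -l app=hello-node --watch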

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "356.527726ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.165047ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "394.924531ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "60.861865ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
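The profile listing variants timed above differ only in output mode; the -l and --light forms skip probing the live cluster status, which is presumably why they complete in tens of milliseconds versus ~350-400ms for the full listings:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light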

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdany-port2939634248/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765859818125900214" to /tmp/TestFunctionalparallelMountCmdany-port2939634248/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765859818125900214" to /tmp/TestFunctionalparallelMountCmdany-port2939634248/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765859818125900214" to /tmp/TestFunctionalparallelMountCmdany-port2939634248/001/test-1765859818125900214
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (191.826684ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:36:58.318046    8987 retry.go:31] will retry after 476.89464ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:36 test-1765859818125900214
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh cat /mount-9p/test-1765859818125900214
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-448088 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [d9d0250b-cdce-48cc-bedc-7fc5e1c52406] Pending
helpers_test.go:353: "busybox-mount" [d9d0250b-cdce-48cc-bedc-7fc5e1c52406] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [d9d0250b-cdce-48cc-bedc-7fc5e1c52406] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [d9d0250b-cdce-48cc-bedc-7fc5e1c52406] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003514472s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-448088 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdany-port2939634248/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.94s)
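The any-port flow mounts a host temp directory into the guest over 9p and verifies it from both sides. A condensed manual equivalent (assuming the same profile, with /tmp/somedir as a placeholder for the per-test temp directory and the mount left running in another terminal):

    out/minikube-linux-amd64 mount -p functional-448088 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-448088 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-448088 ssh "sudo umount -f /mount-9p"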

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdspecific-port3973909495/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.18889ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:37:06.268626    8987 retry.go:31] will retry after 373.427181ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdspecific-port3973909495/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "sudo umount -f /mount-9p": exit status 1 (178.599834ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-448088 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdspecific-port3973909495/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.32s)
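The specific-port variant is the same flow pinned to a fixed 9p port; the final forced umount legitimately exits with status 32 ("not mounted") because the mount daemon has already been stopped by then:

    out/minikube-linux-amd64 mount -p functional-448088 /tmp/somedir:/mount-9p --alsologtostderr -v=1 --port 46464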

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 service list: (1.291197488s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.29s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount1
I1216 04:37:07.447712    8987 detect.go:223] nested VM detected
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount1: exit status 1 (184.638893ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:37:07.567665    8987 retry.go:31] will retry after 255.788066ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-448088 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-448088 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2220333940/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.01s)
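VerifyCleanup starts three concurrent mounts of the same host directory and then relies on mount --kill=true to tear all of them down, which is why the follow-up stop calls above only find already-dead processes. Roughly:

    out/minikube-linux-amd64 mount -p functional-448088 /tmp/somedir:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-448088 /tmp/somedir:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-448088 /tmp/somedir:/mount3 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-448088 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 mount -p functional-448088 --kill=true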

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-448088 service list -o json: (1.240581868s)
functional_test.go:1504: Took "1.240698026s" to run "out/minikube-linux-amd64 -p functional-448088 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.24s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.162:31998
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-448088 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.162:31998
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
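The ServiceCmd subtests above resolve the NodePort exposed by the hello-node deployment in several output formats; against this cluster they all end up at 192.168.39.162:31998:

    out/minikube-linux-amd64 -p functional-448088 service list
    out/minikube-linux-amd64 -p functional-448088 service list -o json
    out/minikube-linux-amd64 -p functional-448088 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-448088 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-448088 service hello-node --url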

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-448088
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-448088
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-448088
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22141-5059/.minikube/files/etc/test/nested/copy/8987/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 04:38:27.159692    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-431901 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m14.231662372s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (38.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 04:38:32.511657    8987 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --alsologtostderr -v=8
E1216 04:38:54.864204    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-431901 --alsologtostderr -v=8: (38.881324932s)
functional_test.go:678: soft start took 38.881689985s for "functional-431901" cluster.
I1216 04:39:11.393330    8987 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (38.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-431901 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.1: (1.018036909s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.3: (1.032489371s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:latest: (1.091338468s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.14s)
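cache add pulls each remote image into minikube's on-host cache and loads it into the guest runtime; reproducing this block and then confirming the images inside the node is simply:

    out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-amd64 -p functional-431901 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl images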

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach112735640/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache add minikube-local-cache-test:functional-431901
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 cache add minikube-local-cache-test:functional-431901: (1.904912058s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache delete minikube-local-cache-test:functional-431901
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.256433ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)
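cache_reload removes a cached image from inside the node, confirms it is gone, and then uses cache reload to push everything in the host cache back into the guest; the sequence, under the same profile:

    out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail here
    out/minikube-linux-amd64 -p functional-431901 cache reload
    out/minikube-linux-amd64 -p functional-431901 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # present again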

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 kubectl -- --context functional-431901 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-431901 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-431901 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.541429252s)
functional_test.go:776: restart took 35.541538911s for "functional-431901" cluster.
I1216 04:39:54.561652    8987 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (35.54s)
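ExtraConfig restarts the existing profile with an additional apiserver flag and waits for all components to come back, which is what accounts for the ~35s restart measured above:

    out/minikube-linux-amd64 start -p functional-431901 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all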

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-431901 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 logs: (1.24889088s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2628780839/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2628780839/001/logs.txt: (1.286172366s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-431901 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-431901
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-431901: exit status 115 (223.53731ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.135:31105 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-431901 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 config get cpus: exit status 14 (62.711814ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 config get cpus: exit status 14 (59.087525ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (39.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-431901 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-431901 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 18162: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (39.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-431901 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (133.043667ms)

                                                
                                                
-- stdout --
	* [functional-431901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:40:14.968214   18004 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:40:14.968602   18004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:14.968676   18004 out.go:374] Setting ErrFile to fd 2...
	I1216 04:40:14.968692   18004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:14.969186   18004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:40:14.970150   18004 out.go:368] Setting JSON to false
	I1216 04:40:14.970986   18004 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1357,"bootTime":1765858658,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:40:14.971078   18004 start.go:143] virtualization: kvm guest
	I1216 04:40:14.972881   18004 out.go:179] * [functional-431901] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 04:40:14.974280   18004 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:40:14.974290   18004 notify.go:221] Checking for updates...
	I1216 04:40:14.975359   18004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:40:14.977498   18004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:40:14.980964   18004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:40:14.982164   18004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:40:14.983359   18004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:40:14.984961   18004 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:40:14.985548   18004 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:40:15.017375   18004 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 04:40:15.018499   18004 start.go:309] selected driver: kvm2
	I1216 04:40:15.018511   18004 start.go:927] validating driver "kvm2" against &{Name:functional-431901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-431901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:40:15.018607   18004 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:40:15.020790   18004 out.go:203] 
	W1216 04:40:15.021874   18004 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 04:40:15.022984   18004 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431901 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-431901 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (118.718115ms)

                                                
                                                
-- stdout --
	* [functional-431901] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:40:15.209661   18034 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:40:15.209829   18034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:15.209842   18034 out.go:374] Setting ErrFile to fd 2...
	I1216 04:40:15.209850   18034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:40:15.210135   18034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:40:15.210541   18034 out.go:368] Setting JSON to false
	I1216 04:40:15.211369   18034 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1357,"bootTime":1765858658,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 04:40:15.211440   18034 start.go:143] virtualization: kvm guest
	I1216 04:40:15.213454   18034 out.go:179] * [functional-431901] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 04:40:15.214714   18034 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 04:40:15.214710   18034 notify.go:221] Checking for updates...
	I1216 04:40:15.216972   18034 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 04:40:15.218065   18034 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 04:40:15.219222   18034 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 04:40:15.220364   18034 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 04:40:15.221517   18034 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 04:40:15.223257   18034 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 04:40:15.223927   18034 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 04:40:15.258180   18034 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 04:40:15.259279   18034 start.go:309] selected driver: kvm2
	I1216 04:40:15.259296   18034 start.go:927] validating driver "kvm2" against &{Name:functional-431901 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-431901 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 04:40:15.259432   18034 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 04:40:15.261595   18034 out.go:203] 
	W1216 04:40:15.262719   18034 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 04:40:15.263876   18034 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (11.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-431901 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-431901 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-xl65l" [752ef9f5-4dbb-4b9e-b5fa-b65e7b8bec86] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-xl65l" [752ef9f5-4dbb-4b9e-b5fa-b65e7b8bec86] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004385828s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.135:31325
functional_test.go:1680: http://192.168.39.135:31325: success! body:
Request served by hello-node-connect-9f67c86d4-xl65l

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.135:31325
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (11.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f648efe8-b995-4103-86e0-7c5927efc557] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004086089s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-431901 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-431901 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-431901 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-431901 apply -f testdata/storage-provisioner/pod.yaml
I1216 04:40:08.252565    8987 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [31e2c924-89cb-48ec-b342-88ec1d574527] Pending
helpers_test.go:353: "sp-pod" [31e2c924-89cb-48ec-b342-88ec1d574527] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [31e2c924-89cb-48ec-b342-88ec1d574527] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.006721827s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-431901 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-431901 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-431901 delete -f testdata/storage-provisioner/pod.yaml: (2.690516431s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-431901 apply -f testdata/storage-provisioner/pod.yaml
I1216 04:40:30.250612    8987 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [28f67d48-d991-4d74-b35c-c3f305480ab7] Pending
helpers_test.go:353: "sp-pod" [28f67d48-d991-4d74-b35c-c3f305480ab7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [28f67d48-d991-4d74-b35c-c3f305480ab7] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.009019655s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-431901 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.64s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh -n functional-431901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cp functional-431901:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1652530237/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh -n functional-431901 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh -n functional-431901 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (33.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-431901 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-qd42x" [f8b739bf-ae0d-45ee-aa57-34ea96097848] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-qd42x" [f8b739bf-ae0d-45ee-aa57-34ea96097848] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 27.008106638s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;": exit status 1 (297.587053ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:40:42.767865    8987 retry.go:31] will retry after 872.158316ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;": exit status 1 (441.743148ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:40:44.082275    8987 retry.go:31] will retry after 1.247409949s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;": exit status 1 (182.042061ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 04:40:45.512914    8987 retry.go:31] will retry after 3.125497396s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431901 exec mysql-7d7b65bc95-qd42x -- mysql -ppassword -e "show databases;"
2025/12/16 04:40:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (33.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8987/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /etc/test/nested/copy/8987/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8987.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /etc/ssl/certs/8987.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8987.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /usr/share/ca-certificates/8987.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /etc/ssl/certs/89872.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89872.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /usr/share/ca-certificates/89872.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-431901 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "sudo systemctl is-active docker": exit status 1 (159.232241ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "sudo systemctl is-active containerd": exit status 1 (164.073954ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-431901 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-431901 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-mrvct" [62bce830-f17a-4b0b-a58d-8ba66c1bd7b4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-mrvct" [62bce830-f17a-4b0b-a58d-8ba66c1bd7b4] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004028504s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (9.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "238.863274ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.588493ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "233.900966ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.963784ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.93s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3787044562/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765860004261908394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3787044562/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765860004261908394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3787044562/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765860004261908394" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3787044562/001/test-1765860004261908394
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.159818ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:40:04.419361    8987 retry.go:31] will retry after 489.834128ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 04:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 04:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 04:40 test-1765860004261908394
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh cat /mount-9p/test-1765860004261908394
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-431901 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ee01a98e-412f-45e8-a08b-b5f7747ee561] Pending
helpers_test.go:353: "busybox-mount" [ee01a98e-412f-45e8-a08b-b5f7747ee561] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ee01a98e-412f-45e8-a08b-b5f7747ee561] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ee01a98e-412f-45e8-a08b-b5f7747ee561] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.008450529s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-431901 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3787044562/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service list -o json
functional_test.go:1504: Took "214.407761ms" to run "out/minikube-linux-amd64 -p functional-431901 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.135:32515
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.135:32515
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431901 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-431901
localhost/kicbase/echo-server:functional-431901
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431901 image ls --format short --alsologtostderr:
I1216 04:40:21.692909   18338 out.go:360] Setting OutFile to fd 1 ...
I1216 04:40:21.693180   18338 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:21.693190   18338 out.go:374] Setting ErrFile to fd 2...
I1216 04:40:21.693196   18338 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:21.693399   18338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:40:21.693986   18338 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:21.694100   18338 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:21.696150   18338 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:21.698553   18338 main.go:143] libmachine: domain functional-431901 has defined MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:21.699002   18338 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:ff:4d", ip: ""} in network mk-functional-431901: {Iface:virbr1 ExpiryTime:2025-12-16 05:37:33 +0000 UTC Type:0 Mac:52:54:00:d1:ff:4d Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:functional-431901 Clientid:01:52:54:00:d1:ff:4d}
I1216 04:40:21.699032   18338 main.go:143] libmachine: domain functional-431901 has defined IP address 192.168.39.135 and MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:21.699195   18338 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-431901/id_rsa Username:docker}
I1216 04:40:21.781952   18338 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431901 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ localhost/minikube-local-cache-test     │ functional-431901  │ 24e33352bbfcc │ 3.33kB │
│ localhost/my-image                      │ functional-431901  │ 9f891beda2fdb │ 1.47MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-431901  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431901 image ls --format table --alsologtostderr:
I1216 04:40:37.386385   18499 out.go:360] Setting OutFile to fd 1 ...
I1216 04:40:37.386513   18499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.386522   18499 out.go:374] Setting ErrFile to fd 2...
I1216 04:40:37.386528   18499 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.386716   18499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:40:37.387271   18499 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:37.387362   18499 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:37.389405   18499 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:37.391557   18499 main.go:143] libmachine: domain functional-431901 has defined MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:37.391951   18499 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:ff:4d", ip: ""} in network mk-functional-431901: {Iface:virbr1 ExpiryTime:2025-12-16 05:37:33 +0000 UTC Type:0 Mac:52:54:00:d1:ff:4d Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:functional-431901 Clientid:01:52:54:00:d1:ff:4d}
I1216 04:40:37.391975   18499 main.go:143] libmachine: domain functional-431901 has defined IP address 192.168.39.135 and MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:37.392139   18499 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-431901/id_rsa Username:docker}
I1216 04:40:37.478075   18499 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.21s)
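
The table above is rendered from the guest's crictl image list. A hedged one-liner for picking a single row out of it (the grep pattern is just an example):

  out/minikube-linux-amd64 -p functional-431901 image ls --format table | grep kube-apiserver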

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431901 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["do
cker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/core
dns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1d
db9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-431901"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-min
ikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-con
troller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"1937ec45bc78583836bb619050fd9fc1b3f1856edbb8ff1837f25fad8aebff89","repoDigests":["docker.io/library/1d684e9934a7a8915a4fcab17072ff64a2a001554238b787c53726adc783c93f-tmp@sha256:e1483c1f0a8f681c93a436c81d558476270e38f9d273fb9c3c3c4e099c2d28b1"],"repoTags":[],"size":"1466018"},{"id":"24e33352bbfcc56626c5086c831b2a574b88a2b4859f249bec89f32bb534e58f","repoDigests":["localhost/minikube-local-cache-test@sha256:14f241e86b415d057bf0b218bc053effb9da8fa7be3ed24e4a8a142d801a8a1f"],"repoTags":["localhost/minikube-local-cache-test:functional-431901"],"size":"3330"},{"id":"9f891beda2fdbc2eb4de73bff15a15b9df422967112797b5d2ff54e21c973b46","repoDigests":["localhost/my-image@sha256:2d9dee6824d4d2897bd5498662f58a1be8b644b7118708
207dc722b04ea2feb8"],"repoTags":["localhost/my-image:functional-431901"],"size":"1468599"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.i
o/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431901 image ls --format json --alsologtostderr:
I1216 04:40:37.110087   18488 out.go:360] Setting OutFile to fd 1 ...
I1216 04:40:37.110414   18488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.110426   18488 out.go:374] Setting ErrFile to fd 2...
I1216 04:40:37.110431   18488 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:37.110643   18488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:40:37.111390   18488 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:37.111533   18488 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:37.113734   18488 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:37.115750   18488 main.go:143] libmachine: domain functional-431901 has defined MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:37.116108   18488 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:ff:4d", ip: ""} in network mk-functional-431901: {Iface:virbr1 ExpiryTime:2025-12-16 05:37:33 +0000 UTC Type:0 Mac:52:54:00:d1:ff:4d Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:functional-431901 Clientid:01:52:54:00:d1:ff:4d}
I1216 04:40:37.116132   18488 main.go:143] libmachine: domain functional-431901 has defined IP address 192.168.39.135 and MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:37.116246   18488 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-431901/id_rsa Username:docker}
I1216 04:40:37.208135   18488 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)
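
The JSON format is the most convenient one to post-process. A minimal sketch, assuming jq is installed on the host (the test itself does not use jq), that extracts only the repo tags from the same output:

  out/minikube-linux-amd64 -p functional-431901 image ls --format json | jq -r '.[].repoTags[]'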

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431901 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 24e33352bbfcc56626c5086c831b2a574b88a2b4859f249bec89f32bb534e58f
repoDigests:
- localhost/minikube-local-cache-test@sha256:14f241e86b415d057bf0b218bc053effb9da8fa7be3ed24e4a8a142d801a8a1f
repoTags:
- localhost/minikube-local-cache-test:functional-431901
size: "3330"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-431901
size: "4944818"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431901 image ls --format yaml --alsologtostderr:
I1216 04:40:21.880572   18349 out.go:360] Setting OutFile to fd 1 ...
I1216 04:40:21.880860   18349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:21.880871   18349 out.go:374] Setting ErrFile to fd 2...
I1216 04:40:21.880874   18349 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:21.881135   18349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:40:21.881731   18349 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:21.881829   18349 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:21.883767   18349 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:21.885627   18349 main.go:143] libmachine: domain functional-431901 has defined MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:21.886002   18349 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:ff:4d", ip: ""} in network mk-functional-431901: {Iface:virbr1 ExpiryTime:2025-12-16 05:37:33 +0000 UTC Type:0 Mac:52:54:00:d1:ff:4d Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:functional-431901 Clientid:01:52:54:00:d1:ff:4d}
I1216 04:40:21.886025   18349 main.go:143] libmachine: domain functional-431901 has defined IP address 192.168.39.135 and MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:21.886155   18349 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-431901/id_rsa Username:docker}
I1216 04:40:21.968410   18349 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (15.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh pgrep buildkitd: exit status 1 (167.633724ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image build -t localhost/my-image:functional-431901 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 image build -t localhost/my-image:functional-431901 testdata/build --alsologtostderr: (14.65063082s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431901 image build -t localhost/my-image:functional-431901 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1937ec45bc7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-431901
--> 9f891beda2f
Successfully tagged localhost/my-image:functional-431901
9f891beda2fdbc2eb4de73bff15a15b9df422967112797b5d2ff54e21c973b46
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431901 image build -t localhost/my-image:functional-431901 testdata/build --alsologtostderr:
I1216 04:40:22.248491   18371 out.go:360] Setting OutFile to fd 1 ...
I1216 04:40:22.248788   18371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:22.248800   18371 out.go:374] Setting ErrFile to fd 2...
I1216 04:40:22.248806   18371 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 04:40:22.249004   18371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
I1216 04:40:22.249689   18371 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:22.250369   18371 config.go:182] Loaded profile config "functional-431901": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 04:40:22.252945   18371 ssh_runner.go:195] Run: systemctl --version
I1216 04:40:22.256031   18371 main.go:143] libmachine: domain functional-431901 has defined MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:22.256471   18371 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:ff:4d", ip: ""} in network mk-functional-431901: {Iface:virbr1 ExpiryTime:2025-12-16 05:37:33 +0000 UTC Type:0 Mac:52:54:00:d1:ff:4d Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:functional-431901 Clientid:01:52:54:00:d1:ff:4d}
I1216 04:40:22.256506   18371 main.go:143] libmachine: domain functional-431901 has defined IP address 192.168.39.135 and MAC address 52:54:00:d1:ff:4d in network mk-functional-431901
I1216 04:40:22.256696   18371 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/functional-431901/id_rsa Username:docker}
I1216 04:40:22.370909   18371 build_images.go:162] Building image from path: /tmp/build.310575420.tar
I1216 04:40:22.370978   18371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 04:40:22.405422   18371 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.310575420.tar
I1216 04:40:22.416185   18371 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.310575420.tar: stat -c "%s %y" /var/lib/minikube/build/build.310575420.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.310575420.tar': No such file or directory
I1216 04:40:22.416224   18371 ssh_runner.go:362] scp /tmp/build.310575420.tar --> /var/lib/minikube/build/build.310575420.tar (3072 bytes)
I1216 04:40:22.480837   18371 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.310575420
I1216 04:40:22.508045   18371 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.310575420 -xf /var/lib/minikube/build/build.310575420.tar
I1216 04:40:22.531633   18371 crio.go:315] Building image: /var/lib/minikube/build/build.310575420
I1216 04:40:22.531742   18371 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-431901 /var/lib/minikube/build/build.310575420 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 04:40:36.801068   18371 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-431901 /var/lib/minikube/build/build.310575420 --cgroup-manager=cgroupfs: (14.269289133s)
I1216 04:40:36.801162   18371 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.310575420
I1216 04:40:36.818275   18371 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.310575420.tar
I1216 04:40:36.829902   18371 build_images.go:218] Built localhost/my-image:functional-431901 from /tmp/build.310575420.tar
I1216 04:40:36.829932   18371 build_images.go:134] succeeded building to: functional-431901
I1216 04:40:36.829936   18371 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (15.03s)
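
The three build steps in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) come from the Dockerfile under testdata/build. A minimal sketch that reproduces the same build by hand; the /tmp/build-ctx directory and the content.txt payload are hypothetical, only the image name and the build/ls commands mirror the test:

  mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
  printf 'hello\n' > content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  out/minikube-linux-amd64 -p functional-431901 image build -t localhost/my-image:functional-431901 .
  out/minikube-linux-amd64 -p functional-431901 image ls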

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3878140614/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (166.173045ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:40:13.355532    8987 retry.go:31] will retry after 679.243783ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3878140614/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "sudo umount -f /mount-9p": exit status 1 (179.35973ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-431901 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3878140614/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.62s)
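
A minimal sketch of the same specific-port flow outside the test harness: mount a host directory into the guest over 9p on a fixed port, verify it, then tear it down (the /tmp/hostdir path is hypothetical; the commands mirror the ones logged above):

  mkdir -p /tmp/hostdir
  out/minikube-linux-amd64 mount -p functional-431901 /tmp/hostdir:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-431901 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 mount -p functional-431901 --kill=true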

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image load --daemon kicbase/echo-server:functional-431901 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 image load --daemon kicbase/echo-server:functional-431901 --alsologtostderr: (1.1101908s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T" /mount1: exit status 1 (197.38685ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 04:40:15.005896    8987 retry.go:31] will retry after 352.72275ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-431901 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431901 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2012694836/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image load --daemon kicbase/echo-server:functional-431901 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-431901
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image load --daemon kicbase/echo-server:functional-431901 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.38s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image save kicbase/echo-server:functional-431901 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-431901 image save kicbase/echo-server:functional-431901 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.378391228s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (1.38s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image rm kicbase/echo-server:functional-431901 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.62s)
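
ImageSaveToFile and ImageLoadFromFile together exercise a save/load round trip through a tarball. A minimal sketch of that round trip, using a hypothetical /tmp path instead of the workspace path above:

  out/minikube-linux-amd64 -p functional-431901 image save kicbase/echo-server:functional-431901 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-431901 image rm kicbase/echo-server:functional-431901
  out/minikube-linux-amd64 -p functional-431901 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-431901 image ls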

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-431901
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-431901 image save --daemon kicbase/echo-server:functional-431901 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-431901
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (191.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1216 04:41:30.911936    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:30.918299    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:30.929677    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:30.951010    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:30.992417    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:31.073879    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:31.235475    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:31.557182    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:32.199210    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:33.480884    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:36.042274    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:41.164212    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:41:51.406031    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:11.887385    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:42:52.849718    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:43:27.163978    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m11.38047659s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (191.95s)
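StartCluster above brings up a three-node HA control plane with `start --ha --wait true`. As a hedged aside, one quick way to confirm the control-plane count afterwards is to count nodes carrying the standard node-role.kubernetes.io/control-plane label; a minimal Go sketch (kubectl on PATH and that label being present are assumptions, not something this log asserts):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Count nodes in the ha-607292 context that carry the control-plane role label.
	// Assumption: the cluster labels control-plane nodes with the usual upstream label.
	out, err := exec.Command("kubectl", "--context", "ha-607292", "get", "nodes",
		"-l", "node-role.kubernetes.io/control-plane", "-o", "name").Output()
	if err != nil {
		panic(err)
	}
	names := strings.Fields(string(out))
	fmt.Printf("control-plane nodes: %d %v\n", len(names), names)
}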

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 kubectl -- rollout status deployment/busybox: (4.808592597s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-fr6rk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-hr9zv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-v67hf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-fr6rk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-hr9zv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-v67hf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-fr6rk -- nslookup kubernetes.default.svc.cluster.local
E1216 04:44:14.771600    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-hr9zv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-v67hf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.08s)
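DeployApp rolls out a busybox deployment and resolves kubernetes.default.svc.cluster.local from every replica. A rough Go sketch of that per-pod DNS check, using plain kubectl rather than the `minikube kubectl --` wrapper shown above; the pod names are the ones from this run and change every time:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pod names as printed in this run; real code would read them from
	// `kubectl get pods -o jsonpath='{.items[*].metadata.name}'`.
	pods := []string{"busybox-7b57f96db7-fr6rk", "busybox-7b57f96db7-hr9zv", "busybox-7b57f96db7-v67hf"}
	for _, pod := range pods {
		out, err := exec.Command("kubectl", "--context", "ha-607292", "exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		if err != nil {
			fmt.Printf("DNS lookup failed in %s: %v\n%s\n", pod, err, out)
			continue
		}
		fmt.Printf("%s resolved the in-cluster service name\n", pod)
	}
}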

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-fr6rk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-fr6rk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-hr9zv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-hr9zv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-v67hf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 kubectl -- exec busybox-7b57f96db7-v67hf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (45.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 node add --alsologtostderr -v 5: (45.005395596s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
E1216 04:45:01.705760    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:01.712151    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:01.723586    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:01.744976    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:01.786880    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:01.868640    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:02.030252    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-607292 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
E1216 04:45:02.352827    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1216 04:45:02.994151    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)
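The HAppy*/Degraded* checks all shell out to `profile list --output json` and inspect the reported profile health. A hedged Go sketch of reading that output; the JSON layout (a "valid" list of profiles with Name/Status fields) is an assumption here, not something this log spells out:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `profile list --output json`; only the fields used below are declared.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}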

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (10.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp testdata/cp-test.txt ha-607292:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4159347006/001/cp-test_ha-607292.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test.txt"
E1216 04:45:04.275625    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292:/home/docker/cp-test.txt ha-607292-m02:/home/docker/cp-test_ha-607292_ha-607292-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test_ha-607292_ha-607292-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292:/home/docker/cp-test.txt ha-607292-m03:/home/docker/cp-test_ha-607292_ha-607292-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test_ha-607292_ha-607292-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292:/home/docker/cp-test.txt ha-607292-m04:/home/docker/cp-test_ha-607292_ha-607292-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test_ha-607292_ha-607292-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp testdata/cp-test.txt ha-607292-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4159347006/001/cp-test_ha-607292-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test.txt"
E1216 04:45:06.837197    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m02:/home/docker/cp-test.txt ha-607292:/home/docker/cp-test_ha-607292-m02_ha-607292.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test_ha-607292-m02_ha-607292.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m02:/home/docker/cp-test.txt ha-607292-m03:/home/docker/cp-test_ha-607292-m02_ha-607292-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test_ha-607292-m02_ha-607292-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m02:/home/docker/cp-test.txt ha-607292-m04:/home/docker/cp-test_ha-607292-m02_ha-607292-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test_ha-607292-m02_ha-607292-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp testdata/cp-test.txt ha-607292-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4159347006/001/cp-test_ha-607292-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m03:/home/docker/cp-test.txt ha-607292:/home/docker/cp-test_ha-607292-m03_ha-607292.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test_ha-607292-m03_ha-607292.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m03:/home/docker/cp-test.txt ha-607292-m02:/home/docker/cp-test_ha-607292-m03_ha-607292-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test_ha-607292-m03_ha-607292-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m03:/home/docker/cp-test.txt ha-607292-m04:/home/docker/cp-test_ha-607292-m03_ha-607292-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test_ha-607292-m03_ha-607292-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp testdata/cp-test.txt ha-607292-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4159347006/001/cp-test_ha-607292-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m04:/home/docker/cp-test.txt ha-607292:/home/docker/cp-test_ha-607292-m04_ha-607292.txt
E1216 04:45:11.959557    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292 "sudo cat /home/docker/cp-test_ha-607292-m04_ha-607292.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m04:/home/docker/cp-test.txt ha-607292-m02:/home/docker/cp-test_ha-607292-m04_ha-607292-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m02 "sudo cat /home/docker/cp-test_ha-607292-m04_ha-607292-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 cp ha-607292-m04:/home/docker/cp-test.txt ha-607292-m03:/home/docker/cp-test_ha-607292-m04_ha-607292-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 ssh -n ha-607292-m03 "sudo cat /home/docker/cp-test_ha-607292-m04_ha-607292-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.63s)
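CopyFile pushes testdata/cp-test.txt to every node with `minikube cp` and reads it back over `minikube ssh`. A compact Go sketch of the same copy-then-compare round trip for a single node (profile and node names taken from this run):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile, node = "ha-607292", "ha-607292-m02"

	// Copy the local test file onto the target node.
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare against the original bytes.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}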

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (87.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node stop m02 --alsologtostderr -v 5
E1216 04:45:22.201961    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:45:42.684026    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:46:23.646999    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:46:30.909648    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 node stop m02 --alsologtostderr -v 5: (1m27.437739839s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5: exit status 7 (503.355788ms)

                                                
                                                
-- stdout --
	ha-607292
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-607292-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-607292-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-607292-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:46:41.163491   21562 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:46:41.163730   21562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:46:41.163738   21562 out.go:374] Setting ErrFile to fd 2...
	I1216 04:46:41.163742   21562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:46:41.163929   21562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:46:41.164088   21562 out.go:368] Setting JSON to false
	I1216 04:46:41.164113   21562 mustload.go:66] Loading cluster: ha-607292
	I1216 04:46:41.164174   21562 notify.go:221] Checking for updates...
	I1216 04:46:41.164493   21562 config.go:182] Loaded profile config "ha-607292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:46:41.164517   21562 status.go:174] checking status of ha-607292 ...
	I1216 04:46:41.166606   21562 status.go:371] ha-607292 host status = "Running" (err=<nil>)
	I1216 04:46:41.166626   21562 host.go:66] Checking if "ha-607292" exists ...
	I1216 04:46:41.169143   21562 main.go:143] libmachine: domain ha-607292 has defined MAC address 52:54:00:d7:05:a8 in network mk-ha-607292
	I1216 04:46:41.169591   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d7:05:a8", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:41:11 +0000 UTC Type:0 Mac:52:54:00:d7:05:a8 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-607292 Clientid:01:52:54:00:d7:05:a8}
	I1216 04:46:41.169637   21562 main.go:143] libmachine: domain ha-607292 has defined IP address 192.168.39.6 and MAC address 52:54:00:d7:05:a8 in network mk-ha-607292
	I1216 04:46:41.169790   21562 host.go:66] Checking if "ha-607292" exists ...
	I1216 04:46:41.169980   21562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:46:41.172040   21562 main.go:143] libmachine: domain ha-607292 has defined MAC address 52:54:00:d7:05:a8 in network mk-ha-607292
	I1216 04:46:41.172482   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d7:05:a8", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:41:11 +0000 UTC Type:0 Mac:52:54:00:d7:05:a8 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:ha-607292 Clientid:01:52:54:00:d7:05:a8}
	I1216 04:46:41.172504   21562 main.go:143] libmachine: domain ha-607292 has defined IP address 192.168.39.6 and MAC address 52:54:00:d7:05:a8 in network mk-ha-607292
	I1216 04:46:41.172658   21562 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/ha-607292/id_rsa Username:docker}
	I1216 04:46:41.263387   21562 ssh_runner.go:195] Run: systemctl --version
	I1216 04:46:41.270415   21562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:46:41.289918   21562 kubeconfig.go:125] found "ha-607292" server: "https://192.168.39.254:8443"
	I1216 04:46:41.289961   21562 api_server.go:166] Checking apiserver status ...
	I1216 04:46:41.289992   21562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:46:41.311636   21562 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	W1216 04:46:41.323874   21562 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:46:41.323946   21562 ssh_runner.go:195] Run: ls
	I1216 04:46:41.328896   21562 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 04:46:41.334141   21562 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 04:46:41.334160   21562 status.go:463] ha-607292 apiserver status = Running (err=<nil>)
	I1216 04:46:41.334168   21562 status.go:176] ha-607292 status: &{Name:ha-607292 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:46:41.334183   21562 status.go:174] checking status of ha-607292-m02 ...
	I1216 04:46:41.335867   21562 status.go:371] ha-607292-m02 host status = "Stopped" (err=<nil>)
	I1216 04:46:41.335889   21562 status.go:384] host is not running, skipping remaining checks
	I1216 04:46:41.335898   21562 status.go:176] ha-607292-m02 status: &{Name:ha-607292-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:46:41.335917   21562 status.go:174] checking status of ha-607292-m03 ...
	I1216 04:46:41.337222   21562 status.go:371] ha-607292-m03 host status = "Running" (err=<nil>)
	I1216 04:46:41.337242   21562 host.go:66] Checking if "ha-607292-m03" exists ...
	I1216 04:46:41.339661   21562 main.go:143] libmachine: domain ha-607292-m03 has defined MAC address 52:54:00:e9:91:5a in network mk-ha-607292
	I1216 04:46:41.340051   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e9:91:5a", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:43:05 +0000 UTC Type:0 Mac:52:54:00:e9:91:5a Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-607292-m03 Clientid:01:52:54:00:e9:91:5a}
	I1216 04:46:41.340071   21562 main.go:143] libmachine: domain ha-607292-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:e9:91:5a in network mk-ha-607292
	I1216 04:46:41.340183   21562 host.go:66] Checking if "ha-607292-m03" exists ...
	I1216 04:46:41.340356   21562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:46:41.342395   21562 main.go:143] libmachine: domain ha-607292-m03 has defined MAC address 52:54:00:e9:91:5a in network mk-ha-607292
	I1216 04:46:41.342716   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e9:91:5a", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:43:05 +0000 UTC Type:0 Mac:52:54:00:e9:91:5a Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:ha-607292-m03 Clientid:01:52:54:00:e9:91:5a}
	I1216 04:46:41.342733   21562 main.go:143] libmachine: domain ha-607292-m03 has defined IP address 192.168.39.26 and MAC address 52:54:00:e9:91:5a in network mk-ha-607292
	I1216 04:46:41.342857   21562 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/ha-607292-m03/id_rsa Username:docker}
	I1216 04:46:41.426192   21562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:46:41.447750   21562 kubeconfig.go:125] found "ha-607292" server: "https://192.168.39.254:8443"
	I1216 04:46:41.447791   21562 api_server.go:166] Checking apiserver status ...
	I1216 04:46:41.447844   21562 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 04:46:41.468479   21562 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1776/cgroup
	W1216 04:46:41.480895   21562 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1776/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 04:46:41.480955   21562 ssh_runner.go:195] Run: ls
	I1216 04:46:41.486361   21562 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 04:46:41.491129   21562 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 04:46:41.491152   21562 status.go:463] ha-607292-m03 apiserver status = Running (err=<nil>)
	I1216 04:46:41.491163   21562 status.go:176] ha-607292-m03 status: &{Name:ha-607292-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:46:41.491191   21562 status.go:174] checking status of ha-607292-m04 ...
	I1216 04:46:41.492992   21562 status.go:371] ha-607292-m04 host status = "Running" (err=<nil>)
	I1216 04:46:41.493015   21562 host.go:66] Checking if "ha-607292-m04" exists ...
	I1216 04:46:41.496279   21562 main.go:143] libmachine: domain ha-607292-m04 has defined MAC address 52:54:00:3b:cc:32 in network mk-ha-607292
	I1216 04:46:41.496706   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:cc:32", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:44:32 +0000 UTC Type:0 Mac:52:54:00:3b:cc:32 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-607292-m04 Clientid:01:52:54:00:3b:cc:32}
	I1216 04:46:41.496728   21562 main.go:143] libmachine: domain ha-607292-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:3b:cc:32 in network mk-ha-607292
	I1216 04:46:41.496900   21562 host.go:66] Checking if "ha-607292-m04" exists ...
	I1216 04:46:41.497142   21562 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 04:46:41.499753   21562 main.go:143] libmachine: domain ha-607292-m04 has defined MAC address 52:54:00:3b:cc:32 in network mk-ha-607292
	I1216 04:46:41.500425   21562 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:cc:32", ip: ""} in network mk-ha-607292: {Iface:virbr1 ExpiryTime:2025-12-16 05:44:32 +0000 UTC Type:0 Mac:52:54:00:3b:cc:32 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-607292-m04 Clientid:01:52:54:00:3b:cc:32}
	I1216 04:46:41.500467   21562 main.go:143] libmachine: domain ha-607292-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:3b:cc:32 in network mk-ha-607292
	I1216 04:46:41.500654   21562 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/ha-607292-m04/id_rsa Username:docker}
	I1216 04:46:41.589899   21562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 04:46:41.607611   21562 status.go:176] ha-607292-m04 status: &{Name:ha-607292-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (87.94s)
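With m02 stopped, `status` exits non-zero (exit status 7 above) while the other nodes keep reporting Running. A small Go sketch that reads a machine-readable form of the same information; the field names mirror the status struct printed in the stderr log (Name/Host/Kubelet/APIServer), but the exact `--output json` shape (an array for multi-node profiles) is an assumption rather than something verified here:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names match the status struct seen in the log; the JSON shape is assumed.
type nodeStatus struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// A non-zero exit is expected while a node is stopped, so keep the output even on error.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-607292",
		"status", "--output", "json").Output()
	var nodes []nodeStatus
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		degraded := n.Host != "Running" || n.Kubelet != "Running"
		fmt.Printf("%s host=%s kubelet=%s apiserver=%s degraded=%v\n",
			n.Name, n.Host, n.Kubelet, n.APIServer, degraded)
	}
}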

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node start m02 --alsologtostderr -v 5
E1216 04:46:58.612913    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 node start m02 --alsologtostderr -v 5: (30.123212768s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (376s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 stop --alsologtostderr -v 5
E1216 04:47:45.570964    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:48:27.159845    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:49:50.227918    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:01.705506    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:50:29.413476    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:51:30.912670    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 stop --alsologtostderr -v 5: (4m25.21164872s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 start --wait true --alsologtostderr -v 5
E1216 04:53:27.160527    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 start --wait true --alsologtostderr -v 5: (1m50.646475915s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.00s)
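RestartClusterKeepsNodes captures `node list` before the stop and compares it with the list after the full restart. A minimal sketch of that comparison, assuming the two captures are taken exactly around the stop/start pair shown above:

package main

import (
	"fmt"
	"os/exec"
)

func nodeList() string {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-607292",
		"node", "list").Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	before := nodeList()
	// ... stop and restart the cluster here, as the test does ...
	after := nodeList()
	if before != after {
		fmt.Printf("node list changed across restart:\n%s\nvs\n%s\n", before, after)
		return
	}
	fmt.Println("restart kept all nodes")
}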

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 node delete m03 --alsologtostderr -v 5: (17.451960444s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.08s)
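After deleting m03 the test confirms every remaining node reports Ready, using the go-template passed to kubectl above. A short Go sketch that reuses the same template (with the stray shell quoting dropped) and inspects the output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the log: one "True"/"False" line per node Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	s := string(out)
	ready := strings.Contains(s, "True") && !strings.Contains(s, "False")
	fmt.Printf("all nodes Ready: %v\n%s", ready, s)
}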

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (253.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 stop --alsologtostderr -v 5
E1216 04:55:01.705651    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:56:30.913263    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 04:57:53.975896    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 stop --alsologtostderr -v 5: (4m13.840630455s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5: exit status 7 (61.095785ms)

                                                
                                                
-- stdout --
	ha-607292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-607292-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-607292-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 04:58:02.595827   24809 out.go:360] Setting OutFile to fd 1 ...
	I1216 04:58:02.595943   24809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:58:02.595955   24809 out.go:374] Setting ErrFile to fd 2...
	I1216 04:58:02.595961   24809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 04:58:02.596351   24809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 04:58:02.596734   24809 out.go:368] Setting JSON to false
	I1216 04:58:02.596795   24809 mustload.go:66] Loading cluster: ha-607292
	I1216 04:58:02.597077   24809 notify.go:221] Checking for updates...
	I1216 04:58:02.597824   24809 config.go:182] Loaded profile config "ha-607292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 04:58:02.597847   24809 status.go:174] checking status of ha-607292 ...
	I1216 04:58:02.599848   24809 status.go:371] ha-607292 host status = "Stopped" (err=<nil>)
	I1216 04:58:02.599862   24809 status.go:384] host is not running, skipping remaining checks
	I1216 04:58:02.599867   24809 status.go:176] ha-607292 status: &{Name:ha-607292 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:58:02.599882   24809 status.go:174] checking status of ha-607292-m02 ...
	I1216 04:58:02.600942   24809 status.go:371] ha-607292-m02 host status = "Stopped" (err=<nil>)
	I1216 04:58:02.600953   24809 status.go:384] host is not running, skipping remaining checks
	I1216 04:58:02.600957   24809 status.go:176] ha-607292-m02 status: &{Name:ha-607292-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 04:58:02.600968   24809 status.go:174] checking status of ha-607292-m04 ...
	I1216 04:58:02.602056   24809 status.go:371] ha-607292-m04 host status = "Stopped" (err=<nil>)
	I1216 04:58:02.602070   24809 status.go:384] host is not running, skipping remaining checks
	I1216 04:58:02.602073   24809 status.go:176] ha-607292-m04 status: &{Name:ha-607292-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (253.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (99.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1216 04:58:27.159703    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m39.176658235s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (75.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 node add --control-plane --alsologtostderr -v 5
E1216 05:00:01.705523    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-607292 node add --control-plane --alsologtostderr -v 5: (1m15.183525204s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-607292 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

                                                
                                    
x
+
TestJSONOutput/start/Command (76.75s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-950771 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1216 05:01:24.777028    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:01:30.910239    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-950771 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.744470335s)
--- PASS: TestJSONOutput/start/Command (76.75s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-950771 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-950771 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.95s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-950771 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-950771 --output=json --user=testUser: (6.951737289s)
--- PASS: TestJSONOutput/stop/Command (6.95s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-852038 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-852038 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.678744ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"71e7fb32-fa17-406d-87b5-3bec8abcb726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-852038] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d60d556f-a553-4a31-a350-ee40ecd1e2af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22141"}}
	{"specversion":"1.0","id":"ddf975de-51f0-4776-a19f-8181d3a5d222","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0c9bdebc-1944-4366-9e8f-374ee786b867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig"}}
	{"specversion":"1.0","id":"be127c6b-0a11-49cb-bf1d-197f1146eea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube"}}
	{"specversion":"1.0","id":"ad7e7139-642d-46d8-9d6e-e9c78814612c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a7aa5e43-5736-4a06-bc76-74d7e912d2c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"76774c36-97e0-4481-9b33-beb4486a9be7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-852038" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-852038
--- PASS: TestErrorJSONOutput (0.22s)
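The events above are the line-delimited JSON that `minikube start --output=json` emits: one object per line, with the step or error payload under `data`. A minimal Go sketch for consuming that stream, assuming only the `type` values and `data` fields visible in the output above (the binary path, profile name, and intentionally bad driver are the test's own; substitute your own):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Same invocation as the test; swap in your own profile and driver.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "json-output-error-852038", "--output=json", "--driver=fail")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}

		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip anything that is not a JSON event line
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Fprintf(os.Stderr, "error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			default:
				fmt.Println(ev.Data["message"])
			}
		}
		_ = cmd.Wait() // a non-zero exit (56 above) is expected for an unsupported driver
	}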

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (83.88s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-423791 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-423791 --driver=kvm2  --container-runtime=crio: (40.952294872s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-425901 --driver=kvm2  --container-runtime=crio
E1216 05:03:27.161806    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-425901 --driver=kvm2  --container-runtime=crio: (40.396306515s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-423791
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-425901
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-425901" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-425901
helpers_test.go:176: Cleaning up "first-423791" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-423791
--- PASS: TestMinikubeProfile (83.88s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-975062 --memory=3072 --mount-string /tmp/TestMountStartserial3215189311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-975062 --memory=3072 --mount-string /tmp/TestMountStartserial3215189311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.078402184s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.08s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-975062 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-975062 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)
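The verification step asks the guest for `findmnt --json /minikube-host` over SSH. A hedged sketch of checking that output programmatically, assuming util-linux's usual JSON layout with a top-level `filesystems` array (the profile name is the one used by the test above):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// findmntOutput matches util-linux `findmnt --json` (assumed layout).
	type findmntOutput struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		// Same command the test runs over SSH.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-975062",
			"ssh", "--", "findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			log.Fatal(err)
		}
		var fm findmntOutput
		if err := json.Unmarshal(out, &fm); err != nil {
			log.Fatal(err)
		}
		for _, fs := range fm.Filesystems {
			fmt.Printf("%s mounted from %s (%s, %s)\n", fs.Target, fs.Source, fs.FSType, fs.Options)
		}
	}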

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-992148 --memory=3072 --mount-string /tmp/TestMountStartserial3215189311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-992148 --memory=3072 --mount-string /tmp/TestMountStartserial3215189311/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.929216249s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-975062 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-992148
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-992148: (1.311818901s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.82s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-992148
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-992148: (20.819509983s)
--- PASS: TestMountStart/serial/RestartStopped (21.82s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-992148 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (100.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-581749 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 05:05:01.705915    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:06:30.230581    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:06:30.910051    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-581749 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.848125349s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-581749 -- rollout status deployment/busybox: (4.409572653s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-glvs9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-zctw7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-glvs9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-zctw7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-glvs9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-zctw7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.11s)
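The deployment check execs `nslookup` in each busybox replica for three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local), which exercises both external and in-cluster DNS from pods scheduled on both nodes. A compact sketch of the same loop, listing pods in the default namespace exactly as the test does (context name taken from the run above):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// List pod names in the default namespace, as the test does.
		out, err := exec.Command("kubectl", "--context", "multinode-581749",
			"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			log.Fatal(err)
		}
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			for _, name := range names {
				// A failure here means DNS is broken for pods on that node.
				if err := exec.Command("kubectl", "--context", "multinode-581749",
					"exec", pod, "--", "nslookup", name).Run(); err != nil {
					log.Fatalf("%s: cannot resolve %s: %v", pod, name, err)
				}
				fmt.Printf("%s resolved %s\n", pod, name)
			}
		}
	}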

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-glvs9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-glvs9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-zctw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-581749 -- exec busybox-7b57f96db7-zctw7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (48.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-581749 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-581749 -v=5 --alsologtostderr: (48.113684059s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.59s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-581749 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp testdata/cp-test.txt multinode-581749:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3841203624/001/cp-test_multinode-581749.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749:/home/docker/cp-test.txt multinode-581749-m02:/home/docker/cp-test_multinode-581749_multinode-581749-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test_multinode-581749_multinode-581749-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749:/home/docker/cp-test.txt multinode-581749-m03:/home/docker/cp-test_multinode-581749_multinode-581749-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test_multinode-581749_multinode-581749-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp testdata/cp-test.txt multinode-581749-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3841203624/001/cp-test_multinode-581749-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m02:/home/docker/cp-test.txt multinode-581749:/home/docker/cp-test_multinode-581749-m02_multinode-581749.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test_multinode-581749-m02_multinode-581749.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m02:/home/docker/cp-test.txt multinode-581749-m03:/home/docker/cp-test_multinode-581749-m02_multinode-581749-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test_multinode-581749-m02_multinode-581749-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp testdata/cp-test.txt multinode-581749-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3841203624/001/cp-test_multinode-581749-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m03:/home/docker/cp-test.txt multinode-581749:/home/docker/cp-test_multinode-581749-m03_multinode-581749.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749 "sudo cat /home/docker/cp-test_multinode-581749-m03_multinode-581749.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 cp multinode-581749-m03:/home/docker/cp-test.txt multinode-581749-m02:/home/docker/cp-test_multinode-581749-m03_multinode-581749-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 ssh -n multinode-581749-m02 "sudo cat /home/docker/cp-test_multinode-581749-m03_multinode-581749-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-581749 node stop m03: (1.704407053s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-581749 status: exit status 7 (327.007344ms)

                                                
                                                
-- stdout --
	multinode-581749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-581749-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-581749-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr: exit status 7 (330.602373ms)

                                                
                                                
-- stdout --
	multinode-581749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-581749-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-581749-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:07:42.629506   30319 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:07:42.629833   30319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:07:42.629843   30319 out.go:374] Setting ErrFile to fd 2...
	I1216 05:07:42.629848   30319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:07:42.630049   30319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:07:42.630217   30319 out.go:368] Setting JSON to false
	I1216 05:07:42.630242   30319 mustload.go:66] Loading cluster: multinode-581749
	I1216 05:07:42.630315   30319 notify.go:221] Checking for updates...
	I1216 05:07:42.630599   30319 config.go:182] Loaded profile config "multinode-581749": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:07:42.630612   30319 status.go:174] checking status of multinode-581749 ...
	I1216 05:07:42.632917   30319 status.go:371] multinode-581749 host status = "Running" (err=<nil>)
	I1216 05:07:42.632939   30319 host.go:66] Checking if "multinode-581749" exists ...
	I1216 05:07:42.635338   30319 main.go:143] libmachine: domain multinode-581749 has defined MAC address 52:54:00:4b:ae:06 in network mk-multinode-581749
	I1216 05:07:42.635812   30319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:ae:06", ip: ""} in network mk-multinode-581749: {Iface:virbr1 ExpiryTime:2025-12-16 06:05:13 +0000 UTC Type:0 Mac:52:54:00:4b:ae:06 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-581749 Clientid:01:52:54:00:4b:ae:06}
	I1216 05:07:42.635839   30319 main.go:143] libmachine: domain multinode-581749 has defined IP address 192.168.39.91 and MAC address 52:54:00:4b:ae:06 in network mk-multinode-581749
	I1216 05:07:42.636009   30319 host.go:66] Checking if "multinode-581749" exists ...
	I1216 05:07:42.636248   30319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:07:42.638763   30319 main.go:143] libmachine: domain multinode-581749 has defined MAC address 52:54:00:4b:ae:06 in network mk-multinode-581749
	I1216 05:07:42.639180   30319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:ae:06", ip: ""} in network mk-multinode-581749: {Iface:virbr1 ExpiryTime:2025-12-16 06:05:13 +0000 UTC Type:0 Mac:52:54:00:4b:ae:06 Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-581749 Clientid:01:52:54:00:4b:ae:06}
	I1216 05:07:42.639202   30319 main.go:143] libmachine: domain multinode-581749 has defined IP address 192.168.39.91 and MAC address 52:54:00:4b:ae:06 in network mk-multinode-581749
	I1216 05:07:42.639377   30319 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/multinode-581749/id_rsa Username:docker}
	I1216 05:07:42.717156   30319 ssh_runner.go:195] Run: systemctl --version
	I1216 05:07:42.723601   30319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:07:42.741833   30319 kubeconfig.go:125] found "multinode-581749" server: "https://192.168.39.91:8443"
	I1216 05:07:42.741875   30319 api_server.go:166] Checking apiserver status ...
	I1216 05:07:42.741926   30319 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 05:07:42.762862   30319 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1216 05:07:42.775223   30319 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 05:07:42.775280   30319 ssh_runner.go:195] Run: ls
	I1216 05:07:42.780746   30319 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I1216 05:07:42.785157   30319 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I1216 05:07:42.785183   30319 status.go:463] multinode-581749 apiserver status = Running (err=<nil>)
	I1216 05:07:42.785193   30319 status.go:176] multinode-581749 status: &{Name:multinode-581749 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:07:42.785216   30319 status.go:174] checking status of multinode-581749-m02 ...
	I1216 05:07:42.786725   30319 status.go:371] multinode-581749-m02 host status = "Running" (err=<nil>)
	I1216 05:07:42.786741   30319 host.go:66] Checking if "multinode-581749-m02" exists ...
	I1216 05:07:42.788861   30319 main.go:143] libmachine: domain multinode-581749-m02 has defined MAC address 52:54:00:7b:bf:b6 in network mk-multinode-581749
	I1216 05:07:42.789230   30319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:bf:b6", ip: ""} in network mk-multinode-581749: {Iface:virbr1 ExpiryTime:2025-12-16 06:06:09 +0000 UTC Type:0 Mac:52:54:00:7b:bf:b6 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-581749-m02 Clientid:01:52:54:00:7b:bf:b6}
	I1216 05:07:42.789254   30319 main.go:143] libmachine: domain multinode-581749-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:7b:bf:b6 in network mk-multinode-581749
	I1216 05:07:42.789395   30319 host.go:66] Checking if "multinode-581749-m02" exists ...
	I1216 05:07:42.789587   30319 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 05:07:42.791544   30319 main.go:143] libmachine: domain multinode-581749-m02 has defined MAC address 52:54:00:7b:bf:b6 in network mk-multinode-581749
	I1216 05:07:42.791970   30319 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:bf:b6", ip: ""} in network mk-multinode-581749: {Iface:virbr1 ExpiryTime:2025-12-16 06:06:09 +0000 UTC Type:0 Mac:52:54:00:7b:bf:b6 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-581749-m02 Clientid:01:52:54:00:7b:bf:b6}
	I1216 05:07:42.792020   30319 main.go:143] libmachine: domain multinode-581749-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:7b:bf:b6 in network mk-multinode-581749
	I1216 05:07:42.792229   30319 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22141-5059/.minikube/machines/multinode-581749-m02/id_rsa Username:docker}
	I1216 05:07:42.878765   30319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 05:07:42.899510   30319 status.go:176] multinode-581749-m02 status: &{Name:multinode-581749-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:07:42.899555   30319 status.go:174] checking status of multinode-581749-m03 ...
	I1216 05:07:42.901222   30319 status.go:371] multinode-581749-m03 host status = "Stopped" (err=<nil>)
	I1216 05:07:42.901244   30319 status.go:384] host is not running, skipping remaining checks
	I1216 05:07:42.901251   30319 status.go:176] multinode-581749-m03 status: &{Name:multinode-581749-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)
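Note that `minikube status` exits non-zero here by design: with m03 stopped it still prints per-node state but returns exit status 7, so callers need to treat that code as "degraded, output still usable" rather than a hard failure. A hedged sketch of handling it (7 is simply the code observed in the run above; any other non-zero exit is treated as a real error):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-581749", "status")
		out, err := cmd.Output()
		fmt.Print(string(out)) // per-node status is printed even on a non-zero exit

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Exit code 7 is what the run above produced with one node stopped.
			fmt.Println("cluster reachable, but at least one node is stopped")
		default:
			log.Fatalf("status failed: %v", err)
		}
	}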

                                                
                                    
TestMultiNode/serial/StartAfterStop (42.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-581749 node start m03 -v=5 --alsologtostderr: (41.747111521s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.25s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (286.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-581749
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-581749
E1216 05:08:27.160042    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:10:01.705390    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-581749: (2m42.656524697s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-581749 --wait=true -v=5 --alsologtostderr
E1216 05:11:30.910240    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-581749 --wait=true -v=5 --alsologtostderr: (2m3.3445358s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-581749
--- PASS: TestMultiNode/serial/RestartKeepsNodes (286.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-581749 node delete m03: (2.190594605s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.66s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (167.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 stop
E1216 05:13:27.164842    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:14:33.979328    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:15:01.706034    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-581749 stop: (2m47.734863515s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-581749 status: exit status 7 (63.186733ms)

                                                
                                                
-- stdout --
	multinode-581749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-581749-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr: exit status 7 (64.33838ms)

                                                
                                                
-- stdout --
	multinode-581749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-581749-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:16:01.800665   32671 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:16:01.800967   32671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:16:01.800979   32671 out.go:374] Setting ErrFile to fd 2...
	I1216 05:16:01.800983   32671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:16:01.801243   32671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:16:01.801473   32671 out.go:368] Setting JSON to false
	I1216 05:16:01.801500   32671 mustload.go:66] Loading cluster: multinode-581749
	I1216 05:16:01.801634   32671 notify.go:221] Checking for updates...
	I1216 05:16:01.801988   32671 config.go:182] Loaded profile config "multinode-581749": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:16:01.802005   32671 status.go:174] checking status of multinode-581749 ...
	I1216 05:16:01.804021   32671 status.go:371] multinode-581749 host status = "Stopped" (err=<nil>)
	I1216 05:16:01.804037   32671 status.go:384] host is not running, skipping remaining checks
	I1216 05:16:01.804043   32671 status.go:176] multinode-581749 status: &{Name:multinode-581749 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 05:16:01.804058   32671 status.go:174] checking status of multinode-581749-m02 ...
	I1216 05:16:01.805308   32671 status.go:371] multinode-581749-m02 host status = "Stopped" (err=<nil>)
	I1216 05:16:01.805323   32671 status.go:384] host is not running, skipping remaining checks
	I1216 05:16:01.805328   32671 status.go:176] multinode-581749-m02 status: &{Name:multinode-581749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (167.86s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (88.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-581749 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 05:16:30.910207    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-581749 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m27.651941269s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-581749 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.14s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-581749
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-581749-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-581749-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.181414ms)

                                                
                                                
-- stdout --
	* [multinode-581749-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-581749-m02' is duplicated with machine name 'multinode-581749-m02' in profile 'multinode-581749'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-581749-m03 --driver=kvm2  --container-runtime=crio
E1216 05:18:04.781248    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-581749-m03 --driver=kvm2  --container-runtime=crio: (42.042333998s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-581749
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-581749: exit status 80 (222.685066ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-581749 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-581749-m03 already exists in multinode-581749-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-581749-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.26s)

                                                
                                    
TestScheduledStopUnix (110.67s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-129323 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-129323 --memory=3072 --driver=kvm2  --container-runtime=crio: (39.011101908s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-129323 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:20.400911   35470 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:20.401185   35470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:20.401193   35470 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:20.401197   35470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:20.401459   35470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:21:20.401741   35470 out.go:368] Setting JSON to false
	I1216 05:21:20.401844   35470 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:20.402158   35470 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:20.402260   35470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/config.json ...
	I1216 05:21:20.402467   35470 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:20.402577   35470 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-129323 -n scheduled-stop-129323
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-129323 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:20.695739   35514 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:20.695861   35514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:20.695868   35514 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:20.695872   35514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:20.696060   35514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:21:20.696281   35514 out.go:368] Setting JSON to false
	I1216 05:21:20.696530   35514 daemonize_unix.go:73] killing process 35503 as it is an old scheduled stop
	I1216 05:21:20.696644   35514 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:20.697074   35514 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:20.697142   35514 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/config.json ...
	I1216 05:21:20.697336   35514 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:20.697429   35514 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 05:21:20.701407    8987 retry.go:31] will retry after 89.466µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.702602    8987 retry.go:31] will retry after 124.774µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.703758    8987 retry.go:31] will retry after 164.798µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.704952    8987 retry.go:31] will retry after 309.905µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.706101    8987 retry.go:31] will retry after 544.148µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.707242    8987 retry.go:31] will retry after 735.921µs: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.708402    8987 retry.go:31] will retry after 1.104595ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.710602    8987 retry.go:31] will retry after 1.182497ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.712836    8987 retry.go:31] will retry after 2.740767ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.716040    8987 retry.go:31] will retry after 2.120144ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.719301    8987 retry.go:31] will retry after 3.10908ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.723537    8987 retry.go:31] will retry after 12.612637ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.736824    8987 retry.go:31] will retry after 10.742696ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.748105    8987 retry.go:31] will retry after 9.820435ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.758390    8987 retry.go:31] will retry after 28.677106ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
I1216 05:21:20.787657    8987 retry.go:31] will retry after 54.620535ms: open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-129323 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1216 05:21:30.912936    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-129323 -n scheduled-stop-129323
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-129323
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-129323 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 05:21:46.445353   35663 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:21:46.445598   35663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:46.445607   35663 out.go:374] Setting ErrFile to fd 2...
	I1216 05:21:46.445611   35663 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:21:46.445841   35663 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:21:46.446075   35663 out.go:368] Setting JSON to false
	I1216 05:21:46.446149   35663 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:46.446475   35663 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:21:46.446543   35663 profile.go:143] Saving config to /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/scheduled-stop-129323/config.json ...
	I1216 05:21:46.446737   35663 mustload.go:66] Loading cluster: scheduled-stop-129323
	I1216 05:21:46.446866   35663 config.go:182] Loaded profile config "scheduled-stop-129323": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-129323
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-129323: exit status 7 (62.917869ms)

                                                
                                                
-- stdout --
	scheduled-stop-129323
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-129323 -n scheduled-stop-129323
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-129323 -n scheduled-stop-129323: exit status 7 (60.607036ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-129323" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-129323
--- PASS: TestScheduledStopUnix (110.67s)
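The retry lines in this test come from waiting for the scheduled-stop daemon to write its pid file under the profile directory, with an increasing delay between attempts. A rough equivalent of that wait loop; the pid-file location assumes the default MINIKUBE_HOME of ~/.minikube, and the backoff constants are illustrative rather than minikube's own:

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	// waitForFile polls until path exists or the deadline passes,
	// doubling the delay between attempts.
	func waitForFile(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 100 * time.Microsecond
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		pid := os.Getenv("HOME") + "/.minikube/profiles/scheduled-stop-129323/pid"
		if err := waitForFile(pid, 5*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("scheduled stop is registered:", pid)
	}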

                                                
                                    
TestRunningBinaryUpgrade (333.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3641750595 start -p running-upgrade-667167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3641750595 start -p running-upgrade-667167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (59.633768498s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-667167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-667167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m29.195409475s)
helpers_test.go:176: Cleaning up "running-upgrade-667167" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-667167
--- PASS: TestRunningBinaryUpgrade (333.49s)

                                                
                                    
TestKubernetesUpgrade (295.24s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.737404473s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-319429
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-319429: (2.128354292s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-319429 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-319429 status --format={{.Host}}: exit status 7 (88.546608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.445036602s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-319429 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.187127ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-319429] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-319429
	    minikube start -p kubernetes-upgrade-319429 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3194292 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-319429 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1216 05:25:01.705283    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-319429 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m36.685670116s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-319429" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-319429
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-319429: (1.001551567s)
--- PASS: TestKubernetesUpgrade (295.24s)
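The key assertion in the downgrade step above is the exit status: once the profile is on v1.35.0-beta.0, a second start with --kubernetes-version=v1.28.0 must fail fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than touching the cluster. A minimal sketch, assuming a placeholder profile name:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const profile = "kubernetes-upgrade-example" // placeholder profile name

	// Attempt a downgrade to an older Kubernetes version on an already upgraded profile.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
		// minikube refuses the downgrade with K8S_DOWNGRADE_UNSUPPORTED.
		fmt.Println("downgrade correctly rejected with exit status 106")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}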

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (98.913508ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-264730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (103.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264730 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1216 05:23:10.234049    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264730 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.579464725s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-264730 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (27.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (26.255608864s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-264730 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-264730 status -o json: exit status 2 (236.470057ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-264730","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-264730
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-264730: (1.007933814s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.50s)
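After switching an existing profile to --no-kubernetes, "minikube status -o json" exits non-zero (status 2 here) because the kubelet is stopped, but it still prints a JSON document that can be parsed to confirm Host is Running while Kubelet and APIServer are Stopped. A small parsing sketch, assuming a placeholder profile name:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// status mirrors the fields shown in the JSON output above.
type status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	const profile = "NoKubernetes-example" // placeholder profile name

	// Output() still returns stdout when the command exits non-zero,
	// which is expected here while the kubelet is stopped.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"status", "-o", "json").Output()

	var st status
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("could not parse status JSON:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}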

                                                
                                    
x
+
TestNoKubernetes/serial/Start (41.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.866937255s)
--- PASS: TestNoKubernetes/serial/Start (41.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22141-5059/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-264730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-264730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (162.709611ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)
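The verification above runs "systemctl is-active" through "minikube ssh"; any non-zero exit confirms the kubelet is not active in --no-kubernetes mode. A sketch of the same check, assuming a placeholder profile name:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet service is active inside the guest.
func kubeletActive(profile string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	// is-active exits 0 only when the unit is active; any error means it is not.
	return cmd.Run() == nil
}

func main() {
	const profile = "NoKubernetes-example" // placeholder profile name
	if kubeletActive(profile) {
		fmt.Println("unexpected: kubelet is active")
	} else {
		fmt.Println("kubelet is not running, as expected with --no-kubernetes")
	}
}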

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-264730
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-264730: (1.345637431s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (41.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-264730 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-264730 --driver=kvm2  --container-runtime=crio: (41.196645288s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-264730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-264730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.075098ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.00s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (119.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2624675129 start -p stopped-upgrade-374609 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1216 05:26:30.911365    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2624675129 start -p stopped-upgrade-374609 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m19.257226429s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2624675129 -p stopped-upgrade-374609 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2624675129 -p stopped-upgrade-374609 stop: (1.903636861s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-374609 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-374609 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.406764679s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-764842 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-764842 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (136.232212ms)

                                                
                                                
-- stdout --
	* [false-764842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22141
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 05:27:30.970954   40288 out.go:360] Setting OutFile to fd 1 ...
	I1216 05:27:30.971339   40288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:27:30.971356   40288 out.go:374] Setting ErrFile to fd 2...
	I1216 05:27:30.971364   40288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 05:27:30.971730   40288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22141-5059/.minikube/bin
	I1216 05:27:30.972477   40288 out.go:368] Setting JSON to false
	I1216 05:27:30.973837   40288 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4193,"bootTime":1765858658,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 05:27:30.973915   40288 start.go:143] virtualization: kvm guest
	I1216 05:27:30.975967   40288 out.go:179] * [false-764842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 05:27:30.977310   40288 notify.go:221] Checking for updates...
	I1216 05:27:30.977348   40288 out.go:179]   - MINIKUBE_LOCATION=22141
	I1216 05:27:30.978590   40288 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 05:27:30.979869   40288 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22141-5059/kubeconfig
	I1216 05:27:30.981164   40288 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22141-5059/.minikube
	I1216 05:27:30.982391   40288 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 05:27:30.984947   40288 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 05:27:30.986848   40288 config.go:182] Loaded profile config "cert-expiration-843108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 05:27:30.987008   40288 config.go:182] Loaded profile config "running-upgrade-667167": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 05:27:30.987139   40288 config.go:182] Loaded profile config "stopped-upgrade-374609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 05:27:30.987238   40288 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 05:27:31.029888   40288 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 05:27:31.031087   40288 start.go:309] selected driver: kvm2
	I1216 05:27:31.031106   40288 start.go:927] validating driver "kvm2" against <nil>
	I1216 05:27:31.031121   40288 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 05:27:31.033444   40288 out.go:203] 
	W1216 05:27:31.034659   40288 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 05:27:31.035908   40288 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-764842 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.156:8443
  name: cert-expiration-843108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:27:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.159:8443
  name: running-upgrade-667167
contexts:
- context:
    cluster: cert-expiration-843108
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-843108
  name: cert-expiration-843108
- context:
    cluster: running-upgrade-667167
    user: running-upgrade-667167
  name: running-upgrade-667167
current-context: ""
kind: Config
users:
- name: cert-expiration-843108
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.key
- name: running-upgrade-667167
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-764842

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-764842"

                                                
                                                
----------------------- debugLogs end: false-764842 [took: 3.457230453s] --------------------------------
helpers_test.go:176: Cleaning up "false-764842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-764842
--- PASS: TestNetworkPlugins/group/false (3.80s)

                                                
                                    
x
+
TestISOImage/Setup (41.43s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-312283 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-312283 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.434372444s)
--- PASS: TestISOImage/Setup (41.43s)

                                                
                                    
x
+
TestPause/serial/Start (87.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-928970 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-928970 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m27.397620725s)
--- PASS: TestPause/serial/Start (87.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-374609
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-374609: (1.173468771s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (102.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.772796157s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.77s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which iptables"
E1216 05:37:42.879213    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.23s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.23s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which wget"
E1216 05:37:41.993277    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/wget (0.21s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.22s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.22s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.22s)
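Each of the Binaries subtests above asserts that "which <tool>" succeeds inside the guest over "minikube ssh", confirming the ISO ships that binary. A table-driven sketch of the same idea, assuming a placeholder profile name:

package iso_sketch

import (
	"os/exec"
	"testing"
)

// TestGuestBinaries checks that each tool resolves via `which` inside the guest.
func TestGuestBinaries(t *testing.T) {
	const profile = "guest-example" // placeholder profile name
	tools := []string{"crictl", "curl", "docker", "git", "iptables",
		"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"}
	for _, tool := range tools {
		t.Run(tool, func(t *testing.T) {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"ssh", "which "+tool).CombinedOutput()
			if err != nil {
				t.Fatalf("which %s failed: %v\n%s", tool, err, out)
			}
		})
	}
}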

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (98.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1216 05:28:27.160408    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/addons-153066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m38.767057554s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-xcxgl" [cd3d48cc-d7da-4b83-9489-378025d6a1ee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004855282s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-764842 "pgrep -a kubelet"
I1216 05:30:01.058543    8987 config.go:182] Loaded profile config "auto-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zm4hq" [76499703-ed90-4533-89f6-63eeb15c3007] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 05:30:01.705533    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-zm4hq" [76499703-ed90-4533-89f6-63eeb15c3007] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00386293s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-764842 "pgrep -a kubelet"
I1216 05:30:05.213481    8987 config.go:182] Loaded profile config "kindnet-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-6bdhl" [6c48ae23-72d1-4091-93c5-1e5bfa4df520] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-6bdhl" [6c48ae23-72d1-4091-93c5-1e5bfa4df520] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004503515s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
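The DNS, Localhost, and HairPin checks above all go through "kubectl exec" against the netcat deployment: DNS resolves kubernetes.default, Localhost dials port 8080 on localhost inside the pod, and HairPin has the pod dial its own service name. A combined sketch, assuming a placeholder kubectl context name:

package main

import (
	"log"
	"os/exec"
)

// kubectlExec runs a command inside the netcat deployment of the given context.
func kubectlExec(ctx string, args ...string) error {
	full := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, args...)
	return exec.Command("kubectl", full...).Run()
}

func main() {
	const ctx = "auto-example" // placeholder kubectl context name
	checks := []struct {
		name string
		args []string
	}{
		{"DNS", []string{"nslookup", "kubernetes.default"}},
		{"Localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
		{"HairPin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}}, // pod dials its own service
	}
	for _, c := range checks {
		if err := kubectlExec(ctx, c.args...); err != nil {
			log.Printf("%s check failed: %v", c.name, err)
		} else {
			log.Printf("%s check passed", c.name)
		}
	}
}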

                                                
                                    
TestNetworkPlugins/group/calico/Start (75.73s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.732018124s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.73s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (85.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.740249581s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.74s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (118.99s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1216 05:31:13.980897    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m58.992689244s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (118.99s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-d5cb8" [5c446ff6-412e-4f85-b059-e19dc55e2775] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1216 05:31:30.911563    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "calico-node-d5cb8" [5c446ff6-412e-4f85-b059-e19dc55e2775] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005204445s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-764842 "pgrep -a kubelet"
I1216 05:31:36.820546    8987 config.go:182] Loaded profile config "calico-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qnbsc" [86f0f61d-f2b8-41ff-be02-75008f82d077] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qnbsc" [86f0f61d-f2b8-41ff-be02-75008f82d077] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008003907s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.6s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.603577289s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.60s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-764842 "pgrep -a kubelet"
I1216 05:31:53.560083    8987 config.go:182] Loaded profile config "custom-flannel-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-764842 replace --force -f testdata/netcat-deployment.yaml
I1216 05:31:54.555455    8987 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5q6xc" [24f22aca-4832-4aa2-89ea-ace9ddb72686] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-5q6xc" [24f22aca-4832-4aa2-89ea-ace9ddb72686] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004825004s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (60.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-764842 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.541618201s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (101.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-436923 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-436923 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m41.277737064s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (101.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-764842 "pgrep -a kubelet"
I1216 05:32:31.392257    8987 config.go:182] Loaded profile config "enable-default-cni-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gzssn" [590923d9-de2d-4d1a-a3e5-7d7999271383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gzssn" [590923d9-de2d-4d1a-a3e5-7d7999271383] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.003786975s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-5cd5m" [c3e2b0a0-4eb4-41b0-99c9-d5603f07529c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.013247387s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (98.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-050912 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-050912 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m38.14062665s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-764842 "pgrep -a kubelet"
I1216 05:33:03.334545    8987 config.go:182] Loaded profile config "flannel-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xb85q" [7a382b71-8dec-4652-9ee1-29c5e359ad95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xb85q" [7a382b71-8dec-4652-9ee1-29c5e359ad95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00420814s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-764842 "pgrep -a kubelet"
I1216 05:33:05.996448    8987 config.go:182] Loaded profile config "bridge-764842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-764842 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nnfxw" [7148960a-b7d8-458e-bc4e-26d9507899dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-nnfxw" [7148960a-b7d8-458e-bc4e-26d9507899dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003999038s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-764842 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-764842 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (86.47s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-988031 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-988031 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m26.468756358s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-534178 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-534178 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m41.309499999s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (101.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-436923 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0aec1f44-39aa-4324-b3ac-7c692d1553c6] Pending
helpers_test.go:353: "busybox" [0aec1f44-39aa-4324-b3ac-7c692d1553c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0aec1f44-39aa-4324-b3ac-7c692d1553c6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.004205929s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-436923 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-436923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-436923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6280912s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-436923 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (78.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-436923 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-436923 --alsologtostderr -v=3: (1m18.97766079s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (78.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-050912 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [7ded35e5-82dd-4922-ae3b-2d5e3e1eda34] Pending
helpers_test.go:353: "busybox" [7ded35e5-82dd-4922-ae3b-2d5e3e1eda34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 05:34:44.783416    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [7ded35e5-82dd-4922-ae3b-2d5e3e1eda34] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006137224s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-050912 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-050912 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-050912 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.034484742s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-050912 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (88.57s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-050912 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-050912 --alsologtostderr -v=3: (1m28.569575772s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-988031 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [402b5ff7-6260-4c9d-b3ca-380b22d3cfe5] Pending
E1216 05:34:59.019188    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.025645    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.037095    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.058540    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.100106    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.181590    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [402b5ff7-6260-4c9d-b3ca-380b22d3cfe5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 05:34:59.343214    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:34:59.664961    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:00.306853    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.333590    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.340134    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.351628    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.373053    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.414587    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.496041    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.588548    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.658012    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.705509    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-431901/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:01.979424    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:02.621541    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [402b5ff7-6260-4c9d-b3ca-380b22d3cfe5] Running
E1216 05:35:03.903702    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:04.150342    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:06.465973    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003899799s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-988031 exec busybox -- /bin/sh -c "ulimit -n"
E1216 05:35:09.272596    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-988031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-988031 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (87.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-988031 --alsologtostderr -v=3
E1216 05:35:11.588304    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-988031 --alsologtostderr -v=3: (1m27.173656199s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-534178 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [08e33d24-efd8-4376-ae21-ba859e3640a7] Pending
helpers_test.go:353: "busybox" [08e33d24-efd8-4376-ae21-ba859e3640a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 05:35:19.514020    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [08e33d24-efd8-4376-ae21-ba859e3640a7] Running
E1216 05:35:21.830441    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004723058s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-534178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-534178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-534178 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-534178 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-534178 --alsologtostderr -v=3: (1m28.72547923s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436923 -n old-k8s-version-436923
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436923 -n old-k8s-version-436923: exit status 7 (67.700146ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-436923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-436923 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1216 05:35:39.995646    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:35:42.312269    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-436923 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.487705132s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-436923 -n old-k8s-version-436923
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-050912 -n no-preload-050912
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-050912 -n no-preload-050912: exit status 7 (76.386772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-050912 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (57.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-050912 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 05:36:20.957357    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/kindnet-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:23.273886    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-050912 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (56.879803552s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-050912 -n no-preload-050912
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8pff9" [a2211eca-d660-452a-8239-fd2be0781533] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1216 05:36:30.614066    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.620571    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.631979    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.653497    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.695364    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.777674    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.910562    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/functional-448088/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:30.940095    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:31.261937    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8pff9" [a2211eca-d660-452a-8239-fd2be0781533] Running
E1216 05:36:31.904086    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:33.185486    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:35.747459    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.007229259s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-988031 -n embed-certs-988031
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-988031 -n embed-certs-988031: exit status 7 (78.334878ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-988031 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-8pff9" [a2211eca-d660-452a-8239-fd2be0781533] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005051409s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-436923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (47.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-988031 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 05:36:40.869092    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-988031 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (47.420954459s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-988031 -n embed-certs-988031
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-436923 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-436923 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436923 -n old-k8s-version-436923
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436923 -n old-k8s-version-436923: exit status 2 (231.842517ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436923 -n old-k8s-version-436923
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436923 -n old-k8s-version-436923: exit status 2 (249.07139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-436923 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-436923 -n old-k8s-version-436923
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-436923 -n old-k8s-version-436923
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-724579 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 05:36:51.110920    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.429121    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.435572    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.447626    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.469037    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.511174    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.592954    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:54.754213    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:55.076246    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:55.717781    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:36:56.999940    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-724579 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m0.50681991s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.51s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178: exit status 7 (89.570773ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-534178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-534178 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 05:36:59.561910    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:04.683970    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:11.592242    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:14.925864    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-534178 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m10.296934175s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c9t2f" [ff3e9e28-7075-412a-948e-7cc014aab719] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c9t2f" [ff3e9e28-7075-412a-948e-7cc014aab719] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.062226875s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-kjqxd" [68537c27-4a91-4c73-a349-11ceb63b042c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-kjqxd" [68537c27-4a91-4c73-a349-11ceb63b042c] Running
E1216 05:37:31.740232    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:31.746754    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:31.758196    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:31.779712    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:31.821186    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:31.902841    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:32.064647    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:32.386250    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:33.027918    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:34.309374    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:35.407477    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005537158s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-c9t2f" [ff3e9e28-7075-412a-948e-7cc014aab719] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003838602s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-050912 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-050912 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-kjqxd" [68537c27-4a91-4c73-a349-11ceb63b042c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01144956s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-988031 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-050912 --alsologtostderr -v=1
E1216 05:37:36.870963    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-050912 --alsologtostderr -v=1: (1.030641021s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-050912 -n no-preload-050912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-050912 -n no-preload-050912: exit status 2 (243.548272ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-050912 -n no-preload-050912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-050912 -n no-preload-050912: exit status 2 (253.028834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-050912 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-050912 -n no-preload-050912
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-050912 -n no-preload-050912
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-988031 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-988031 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-988031 --alsologtostderr -v=1: (1.123282164s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-988031 -n embed-certs-988031
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-988031 -n embed-certs-988031: exit status 2 (294.588987ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-988031 -n embed-certs-988031
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-988031 -n embed-certs-988031: exit status 2 (316.044407ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-988031 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-988031 --alsologtostderr -v=1: (1.044452625s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-988031 -n embed-certs-988031
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-988031 -n embed-certs-988031
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.59s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.23s)

                                                
                                    
x
+
TestISOImage/VersionJSON (0.21s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 1d20c337b4b256c51c2d46553500e8ea625f1d01
iso_test.go:118:   iso_version: v1.37.0-1765846775-22141
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.21s)
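Editor's note: the iso_test.go lines above list the four fields the test parses out of /version.json on the guest. As a minimal illustrative sketch only (not minikube's actual test code), decoding a file with those logged keys (commit, iso_version, kicbase_version, minikube_version) in Go could look like this; the struct and file path here are assumptions mirroring the log:

// versioninfo_sketch.go: illustrative only; mirrors the keys that
// TestISOImage/VersionJSON reports, not the test's real implementation.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// isoVersionInfo holds the four fields the test prints from /version.json.
type isoVersionInfo struct {
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
}

func main() {
	// On the guest VM the file lives at /version.json; adjust the path as needed.
	data, err := os.ReadFile("/version.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read /version.json:", err)
		os.Exit(1)
	}
	var info isoVersionInfo
	if err := json.Unmarshal(data, &info); err != nil {
		fmt.Fprintln(os.Stderr, "parse /version.json:", err)
		os.Exit(1)
	}
	fmt.Printf("commit=%s iso=%s kicbase=%s minikube=%s\n",
		info.Commit, info.ISOVersion, info.KicbaseVersion, info.MinikubeVersion)
}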

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.38s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-312283 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1216 05:37:45.195998    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/auto-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.38s)
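Editor's note: the eBPF check above only tests for the presence of /sys/kernel/btf/vmlinux over SSH ("test -f ... && echo 'OK' || echo 'NOT FOUND'"). A hedged Go equivalent of that same presence check, shown purely for illustration and not taken from the test suite:

// btfcheck_sketch.go: illustrative stand-in for the shell check run by
// TestISOImage/eBPFSupport; prints the same OK / NOT FOUND strings.
package main

import (
	"fmt"
	"os"
)

func main() {
	// /sys/kernel/btf/vmlinux exposes the kernel's BTF type information;
	// its presence is a common proxy for eBPF (CO-RE) readiness.
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}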

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-724579 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-724579 --alsologtostderr -v=3
E1216 05:37:52.234677    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:52.554416    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/calico-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.091914    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.098341    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.109764    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.131312    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.173048    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.254525    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:57.416111    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-724579 --alsologtostderr -v=3: (8.91224333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724579 -n newest-cni-724579
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724579 -n newest-cni-724579: exit status 7 (72.610749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-724579 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (32.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-724579 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 05:37:57.737568    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:58.379131    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:37:59.661405    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:02.223571    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.294389    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.300891    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.312384    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.333942    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.375678    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.457278    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.619380    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:06.941758    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:07.344898    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:07.583084    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-724579 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (32.019935903s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-724579 -n newest-cni-724579
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9bjcn" [f49a3efd-1f38-4c8d-ab38-610e14b1c9ed] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1216 05:38:08.864877    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9bjcn" [f49a3efd-1f38-4c8d-ab38-610e14b1c9ed] Running
E1216 05:38:11.426789    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:12.716161    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/enable-default-cni-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004762385s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-9bjcn" [f49a3efd-1f38-4c8d-ab38-610e14b1c9ed] Running
E1216 05:38:16.369665    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/custom-flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:16.548185    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/bridge-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 05:38:17.586907    8987 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/flannel-764842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005362433s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-534178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-534178 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-534178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178: exit status 2 (263.796554ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178: exit status 2 (244.494993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-534178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-534178 -n default-k8s-diff-port-534178
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.69s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-724579 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-724579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-724579 --alsologtostderr -v=1: (1.597565229s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724579 -n newest-cni-724579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724579 -n newest-cni-724579: exit status 2 (263.447294ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724579 -n newest-cni-724579
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724579 -n newest-cni-724579: exit status 2 (255.075352ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-724579 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-724579 -n newest-cni-724579
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-724579 -n newest-cni-724579
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.58s)

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
370 TestNetworkPlugins/group/kubenet 3.66
378 TestNetworkPlugins/group/cilium 4.11
385 TestStartStop/group/disable-driver-mounts 0.22
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
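The kubectl sub-tests skip on this Linux runner because the check is only meaningful on darwin and windows. A minimal, hypothetical sketch of that kind of GOOS guard follows; it is not the actual aaa_download_only_test.go code, just the usual testing.T skip pattern.

// Hypothetical GOOS guard matching the "Test for darwin and windows" skip reason.
package download

import (
	"runtime"
	"testing"
)

func TestKubectlDownload(t *testing.T) {
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skipf("skipping: kubectl download is only validated on darwin/windows, running on %s", runtime.GOOS)
	}
	// ... download and verify the kubectl binary here ...
}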

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-153066 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
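TestDockerFlags, like the DockerEnv and PodmanEnv skips further down, gates on the configured container runtime and bails out because this job tests crio. A hedged sketch of such a guard is below; containerRuntime() is a stand-in for however the suite reads the --container-runtime setting, not a real minikube test helper.

// Hypothetical runtime guard, similar in spirit to docker_test.go:41.
package runtimeguard

import "testing"

// containerRuntime is a placeholder for the harness's runtime lookup.
var containerRuntime = func() string { return "crio" }

func TestDockerOnlyFeature(t *testing.T) {
	if rt := containerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with the docker container runtime, currently testing %s", rt)
	}
	// ... docker-specific assertions here ...
}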

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
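All of the TunnelCmd sub-tests skip for the same reason: changing host routes needs password-less sudo, which this runner does not have. One plausible way to probe for that condition is sketched below; the real check lives in functional_test_tunnel_test.go:90 and may differ in detail.

// Hypothetical probe for password-less route changes.
package tunnel

import (
	"os/exec"
	"testing"
)

func requireRouteAccess(t *testing.T) {
	t.Helper()
	// `sudo -n` fails immediately instead of prompting when a password
	// would be required, which is the condition we want to detect.
	if err := exec.Command("sudo", "-n", "ip", "route", "show").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping: %v", err)
	}
}

func TestTunnelStart(t *testing.T) {
	requireRouteAccess(t)
	// ... start the tunnel and assert on the created route ...
}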

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-764842 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.156:8443
  name: cert-expiration-843108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:27:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.159:8443
  name: running-upgrade-667167
contexts:
- context:
    cluster: cert-expiration-843108
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-843108
  name: cert-expiration-843108
- context:
    cluster: running-upgrade-667167
    user: running-upgrade-667167
  name: running-upgrade-667167
current-context: ""
kind: Config
users:
- name: cert-expiration-843108
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.key
- name: running-upgrade-667167
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-764842

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-764842"

                                                
                                                
----------------------- debugLogs end: kubenet-764842 [took: 3.49942841s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-764842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-764842
--- SKIP: TestNetworkPlugins/group/kubenet (3.66s)
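The debugLogs block above is produced by running a fixed list of labelled probes against the (never-started) kubenet-764842 profile and printing whatever each command returns, errors included, which is why every entry reports a missing context or profile. A hypothetical sketch of such a collector follows; the probe list and commands are illustrative only and not the actual net_test.go / helpers_test.go implementation.

// Hypothetical debug-log collector: print ">>> label:" then the raw output
// of each probe command, without treating command failures as fatal.
package main

import (
	"fmt"
	"os/exec"
)

type probe struct {
	label string
	cmd   []string
}

func collect(profile string) {
	probes := []probe{
		{"netcat: nslookup kubernetes.default", []string{"kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default"}},
		{"host: crictl pods", []string{"minikube", "-p", profile, "ssh", "sudo crictl pods"}},
		{"k8s: kubectl config", []string{"kubectl", "config", "view"}},
	}
	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.cmd[0], p.cmd[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Printf("(command failed: %v)\n", err)
		}
		fmt.Println()
	}
}

func main() { collect("kubenet-764842") }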

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-764842 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-764842" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.156:8443
  name: cert-expiration-843108
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22141-5059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:27:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.159:8443
  name: running-upgrade-667167
contexts:
- context:
    cluster: cert-expiration-843108
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 05:24:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-843108
  name: cert-expiration-843108
- context:
    cluster: running-upgrade-667167
    user: running-upgrade-667167
  name: running-upgrade-667167
current-context: ""
kind: Config
users:
- name: cert-expiration-843108
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/cert-expiration-843108/client.key
- name: running-upgrade-667167
  user:
    client-certificate: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.crt
    client-key: /home/jenkins/minikube-integration/22141-5059/.minikube/profiles/running-upgrade-667167/client.key

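Note: the kubeconfig above only defines the cert-expiration-843108 and running-upgrade-667167 contexts, which is why every kubectl query in this debug-log dump fails with: context "cilium-764842" does not exist. A minimal sketch of the pattern the collector follows, using a hypothetical runKubectl helper (not the actual minikube test code):

package main

import (
	"fmt"
	"os/exec"
)

// runKubectl shells out to kubectl scoped to a single profile's context,
// mirroring how the ">>> k8s:" queries in this debug log are issued.
func runKubectl(context string, args ...string) (string, error) {
	cmd := exec.Command("kubectl", append([]string{"--context", context}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// With no "cilium-764842" entry in the kubeconfig, this prints the same
	// error seen throughout the log: context "cilium-764842" does not exist.
	out, err := runKubectl("cilium-764842", "describe", "daemonset", "cilium", "-n", "kube-system")
	if err != nil {
		fmt.Printf("query failed: %v\n%s", err, out)
	}
}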
                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-764842

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-764842" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-764842"

                                                
                                                
----------------------- debugLogs end: cilium-764842 [took: 3.945691338s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-764842" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-764842
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)
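The ">>> host:" sections above fail for a related reason: minikube reports the "cilium-764842" profile itself as missing, so no host-side commands can be run against it. A collector could short-circuit both classes of failure by checking the profile list first. Below is a minimal sketch, assuming the JSON shape produced by minikube profile list -o json; the helper name and struct are illustrative, not the actual test code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the part of `minikube profile list -o json`
// output needed here; additional fields are omitted.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

// profileExists reports whether a minikube profile is present, so a log
// collector could skip queries that would only print "Profile not found".
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, p := range list.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("cilium-764842")
	fmt.Println(ok, err) // on this run: false <nil>
}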

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-352930" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-352930
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
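The skip above is keyed on the VM driver: the test only runs with the virtualbox driver, so on this KVM run it exits immediately after cleanup. A minimal sketch of that guard using Go's testing package (the -driver flag here is hypothetical; the real suite reads the driver from its own test configuration):

package integration

import (
	"flag"
	"testing"
)

// driver stands in for however the integration suite selects its VM driver.
var driver = flag.String("driver", "kvm2", "VM driver under test")

func TestDisableDriverMounts(t *testing.T) {
	// Mirrors the skip recorded in the log: the behaviour under test only
	// applies to the virtualbox driver, so other drivers skip immediately.
	if *driver != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// ... start/stop assertions would follow on virtualbox ...
}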

                                                
                                    