Test Report: KVM_Linux_crio 21808

db33af8e7a29a5e500790b374373258f8b494afd:2025-12-17:42825

Failed tests (3/431)

Order  Failed test                                     Duration (s)
46     TestAddons/parallel/Ingress                     157.9
345    TestPreload                                     149.15
406    TestPause/serial/SecondStartNoReconfiguration   73.7
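
A failing case can usually be re-run locally against a freshly built binary before digging through the logs below. A minimal sketch, assuming a minikube source checkout; the flag names are taken from the integration harness in test/integration and should be verified against the checked-out revision before use:

  # hypothetical local re-run of the failing ingress test with the same driver/runtime combination
  go test -v -timeout 90m ./test/integration \
    -run "TestAddons/parallel/Ingress" \
    -minikube-start-args="--driver=kvm2 --container-runtime=crio"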
TestAddons/parallel/Ingress (157.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-410268 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-410268 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-410268 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [d8c813f3-2dd2-444d-88d8-fe297f907413] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [d8c813f3-2dd2-444d-88d8-fe297f907413] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004654079s
I1217 11:18:06.404834 1349907 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-410268 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.338476471s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
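Status 28 matches curl's "operation timed out" exit code, i.e. the request from inside the guest never received a response within the time limit. While the addons-410268 profile is still running, the same path can be probed by hand; a minimal sketch using only commands already present in this report (the ingress resource is listed rather than named, since its name is not shown above):

  # check the controller pods and retry the request with verbose output and an explicit timeout
  kubectl --context addons-410268 -n ingress-nginx get pods -o wide
  kubectl --context addons-410268 get ingress -A
  out/minikube-linux-amd64 -p addons-410268 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"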
addons_test.go:290: (dbg) Run:  kubectl --context addons-410268 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.28
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-410268 -n addons-410268
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 logs -n 25: (1.099077831s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-783543                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-783543 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ start   │ --download-only -p binary-mirror-200790 --alsologtostderr --binary-mirror http://127.0.0.1:44955 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-200790 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ delete  │ -p binary-mirror-200790                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-200790 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ addons  │ disable dashboard -p addons-410268                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-410268                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	│ start   │ -p addons-410268 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-410268 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-410268 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ enable headlamp -p addons-410268 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-410268 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-410268 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:17 UTC │
	│ addons  │ addons-410268 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:17 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-410268                                                                                                                                                                                                                                                                                                                                                                                         │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ ip      │ addons-410268 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ ssh     │ addons-410268 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │                     │
	│ addons  │ addons-410268 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ ssh     │ addons-410268 ssh cat /opt/local-path-provisioner/pvc-b4fbc5e0-3297-44da-8635-bcba4bc247bc_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:19 UTC │
	│ addons  │ addons-410268 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ addons  │ addons-410268 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:18 UTC │ 17 Dec 25 11:18 UTC │
	│ ip      │ addons-410268 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-410268        │ jenkins │ v1.37.0 │ 17 Dec 25 11:20 UTC │ 17 Dec 25 11:20 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:15:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:15:20.592142 1350845 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:15:20.592433 1350845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:20.592444 1350845 out.go:374] Setting ErrFile to fd 2...
	I1217 11:15:20.592449 1350845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:20.592624 1350845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:15:20.593159 1350845 out.go:368] Setting JSON to false
	I1217 11:15:20.594100 1350845 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17860,"bootTime":1765952261,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:15:20.594163 1350845 start.go:143] virtualization: kvm guest
	I1217 11:15:20.596159 1350845 out.go:179] * [addons-410268] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:15:20.597382 1350845 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:15:20.597414 1350845 notify.go:221] Checking for updates...
	I1217 11:15:20.599656 1350845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:15:20.600910 1350845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:15:20.602114 1350845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:15:20.603268 1350845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:15:20.604451 1350845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:15:20.605844 1350845 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:15:20.638414 1350845 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 11:15:20.639570 1350845 start.go:309] selected driver: kvm2
	I1217 11:15:20.639588 1350845 start.go:927] validating driver "kvm2" against <nil>
	I1217 11:15:20.639600 1350845 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:15:20.640355 1350845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:15:20.640604 1350845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:15:20.640636 1350845 cni.go:84] Creating CNI manager for ""
	I1217 11:15:20.640679 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:15:20.640689 1350845 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 11:15:20.640737 1350845 start.go:353] cluster config:
	{Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1217 11:15:20.640872 1350845 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:15:20.642225 1350845 out.go:179] * Starting "addons-410268" primary control-plane node in "addons-410268" cluster
	I1217 11:15:20.643270 1350845 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:20.643299 1350845 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:20.643310 1350845 cache.go:65] Caching tarball of preloaded images
	I1217 11:15:20.643417 1350845 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 11:15:20.643428 1350845 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 11:15:20.643719 1350845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json ...
	I1217 11:15:20.643744 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json: {Name:mk3d1e0e95208bc322d19bb9e866aad356f15d5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:20.643886 1350845 start.go:360] acquireMachinesLock for addons-410268: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 11:15:20.643942 1350845 start.go:364] duration metric: took 40.602µs to acquireMachinesLock for "addons-410268"
	I1217 11:15:20.643963 1350845 start.go:93] Provisioning new machine with config: &{Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:15:20.644059 1350845 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 11:15:20.645460 1350845 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1217 11:15:20.645637 1350845 start.go:159] libmachine.API.Create for "addons-410268" (driver="kvm2")
	I1217 11:15:20.645668 1350845 client.go:173] LocalClient.Create starting
	I1217 11:15:20.645763 1350845 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem
	I1217 11:15:20.743070 1350845 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem
	I1217 11:15:20.891945 1350845 main.go:143] libmachine: creating domain...
	I1217 11:15:20.891971 1350845 main.go:143] libmachine: creating network...
	I1217 11:15:20.893505 1350845 main.go:143] libmachine: found existing default network
	I1217 11:15:20.893712 1350845 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 11:15:20.894311 1350845 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ce6b00}
	I1217 11:15:20.894432 1350845 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-410268</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 11:15:20.901303 1350845 main.go:143] libmachine: creating private network mk-addons-410268 192.168.39.0/24...
	I1217 11:15:20.981162 1350845 main.go:143] libmachine: private network mk-addons-410268 192.168.39.0/24 created
	I1217 11:15:20.981449 1350845 main.go:143] libmachine: <network>
	  <name>mk-addons-410268</name>
	  <uuid>c43dccdc-462d-4763-a28d-df275fc6897f</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:2d:d3:cb'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
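
For reference, the freshly created network can be inspected from the host with virsh; a minimal sketch, assuming the qemu:///system connection used by the driver (minikube itself performs the equivalent calls through the libvirt API):

  # inspect the isolated cluster network (illustrative only)
  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml mk-addons-410268      # should match the XML above
  virsh --connect qemu:///system net-dhcp-leases mk-addons-410268  # the lease minikube later matches by MAC address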
	
	I1217 11:15:20.981476 1350845 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 ...
	I1217 11:15:20.981501 1350845 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 11:15:20.981512 1350845 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:15:20.981587 1350845 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 11:15:21.304539 1350845 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa...
	I1217 11:15:21.406148 1350845 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk...
	I1217 11:15:21.406198 1350845 main.go:143] libmachine: Writing magic tar header
	I1217 11:15:21.406220 1350845 main.go:143] libmachine: Writing SSH key tar header
	I1217 11:15:21.406301 1350845 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 ...
	I1217 11:15:21.406375 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268
	I1217 11:15:21.406428 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268 (perms=drwx------)
	I1217 11:15:21.406451 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines
	I1217 11:15:21.406463 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines (perms=drwxr-xr-x)
	I1217 11:15:21.406476 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:15:21.406487 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube (perms=drwxr-xr-x)
	I1217 11:15:21.406498 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916
	I1217 11:15:21.406508 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916 (perms=drwxrwxr-x)
	I1217 11:15:21.406517 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 11:15:21.406544 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 11:15:21.406556 1350845 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 11:15:21.406564 1350845 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 11:15:21.406575 1350845 main.go:143] libmachine: checking permissions on dir: /home
	I1217 11:15:21.406585 1350845 main.go:143] libmachine: skipping /home - not owner
	I1217 11:15:21.406589 1350845 main.go:143] libmachine: defining domain...
	I1217 11:15:21.407959 1350845 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-410268</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-410268'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 11:15:21.414288 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:b5:5c:a9 in network default
	I1217 11:15:21.414861 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:21.414878 1350845 main.go:143] libmachine: starting domain...
	I1217 11:15:21.414883 1350845 main.go:143] libmachine: ensuring networks are active...
	I1217 11:15:21.415719 1350845 main.go:143] libmachine: Ensuring network default is active
	I1217 11:15:21.416183 1350845 main.go:143] libmachine: Ensuring network mk-addons-410268 is active
	I1217 11:15:21.416757 1350845 main.go:143] libmachine: getting domain XML...
	I1217 11:15:21.417848 1350845 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-410268</name>
	  <uuid>7773aa72-69d0-4e14-8c7e-331a57e11558</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/addons-410268.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:35:b5:14'/>
	      <source network='mk-addons-410268'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b5:5c:a9'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
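
The retry loop that follows polls the guest's interface addresses, first from the DHCP lease and then from the ARP table, until an IP appears. The same information is available from the host; a minimal sketch with virsh:

  # watch the VM obtain an address (source=lease first, then arp, mirroring the loop below)
  virsh --connect qemu:///system domiflist addons-410268
  virsh --connect qemu:///system domifaddr addons-410268 --source lease
  virsh --connect qemu:///system domifaddr addons-410268 --source arp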
	
	I1217 11:15:22.749802 1350845 main.go:143] libmachine: waiting for domain to start...
	I1217 11:15:22.751357 1350845 main.go:143] libmachine: domain is now running
	I1217 11:15:22.751378 1350845 main.go:143] libmachine: waiting for IP...
	I1217 11:15:22.752321 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:22.752962 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:22.752977 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:22.753289 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:22.753337 1350845 retry.go:31] will retry after 261.058009ms: waiting for domain to come up
	I1217 11:15:23.016027 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:23.016778 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:23.016795 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:23.017150 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:23.017195 1350845 retry.go:31] will retry after 249.311618ms: waiting for domain to come up
	I1217 11:15:23.268053 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:23.268786 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:23.268819 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:23.269192 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:23.269245 1350845 retry.go:31] will retry after 438.21381ms: waiting for domain to come up
	I1217 11:15:23.709022 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:23.709527 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:23.709546 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:23.709933 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:23.709970 1350845 retry.go:31] will retry after 605.827989ms: waiting for domain to come up
	I1217 11:15:24.317961 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:24.318552 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:24.318574 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:24.318975 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:24.319028 1350845 retry.go:31] will retry after 647.608813ms: waiting for domain to come up
	I1217 11:15:24.967974 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:24.968640 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:24.968680 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:24.969073 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:24.969123 1350845 retry.go:31] will retry after 765.154906ms: waiting for domain to come up
	I1217 11:15:25.735950 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:25.736567 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:25.736581 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:25.736902 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:25.736944 1350845 retry.go:31] will retry after 858.001615ms: waiting for domain to come up
	I1217 11:15:26.597164 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:26.597750 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:26.597767 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:26.598173 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:26.598218 1350845 retry.go:31] will retry after 1.003617568s: waiting for domain to come up
	I1217 11:15:27.603763 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:27.604426 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:27.604454 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:27.604903 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:27.604951 1350845 retry.go:31] will retry after 1.483692995s: waiting for domain to come up
	I1217 11:15:29.090763 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:29.091460 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:29.091475 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:29.091852 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:29.091945 1350845 retry.go:31] will retry after 2.269901769s: waiting for domain to come up
	I1217 11:15:31.363369 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:31.364044 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:31.364076 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:31.364462 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:31.364511 1350845 retry.go:31] will retry after 2.857776026s: waiting for domain to come up
	I1217 11:15:34.225497 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:34.226028 1350845 main.go:143] libmachine: no network interface addresses found for domain addons-410268 (source=lease)
	I1217 11:15:34.226044 1350845 main.go:143] libmachine: trying to list again with source=arp
	I1217 11:15:34.226371 1350845 main.go:143] libmachine: unable to find current IP address of domain addons-410268 in network mk-addons-410268 (interfaces detected: [])
	I1217 11:15:34.226407 1350845 retry.go:31] will retry after 2.523641006s: waiting for domain to come up
	I1217 11:15:36.752165 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:36.752781 1350845 main.go:143] libmachine: domain addons-410268 has current primary IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:36.752799 1350845 main.go:143] libmachine: found domain IP: 192.168.39.28
	I1217 11:15:36.752808 1350845 main.go:143] libmachine: reserving static IP address...
	I1217 11:15:36.753293 1350845 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-410268", mac: "52:54:00:35:b5:14", ip: "192.168.39.28"} in network mk-addons-410268
	I1217 11:15:36.966773 1350845 main.go:143] libmachine: reserved static IP address 192.168.39.28 for domain addons-410268
	I1217 11:15:36.966811 1350845 main.go:143] libmachine: waiting for SSH...
	I1217 11:15:36.966817 1350845 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 11:15:36.969915 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:36.970400 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:minikube Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:36.970429 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:36.970624 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:36.970842 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:36.970855 1350845 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 11:15:37.083230 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
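The probe above simply runs 'exit 0' over SSH until it succeeds. The same readiness check can be reproduced from the host; a minimal sketch using the key path, user and IP recorded elsewhere in this log:

  # manual SSH readiness check against the new VM
  ssh -i /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa \
      -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
      docker@192.168.39.28 'exit 0' && echo "ssh ready"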
	I1217 11:15:37.083690 1350845 main.go:143] libmachine: domain creation complete
	I1217 11:15:37.085486 1350845 machine.go:94] provisionDockerMachine start ...
	I1217 11:15:37.087875 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.088405 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.088442 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.088645 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:37.088901 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:37.088919 1350845 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 11:15:37.199594 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 11:15:37.199630 1350845 buildroot.go:166] provisioning hostname "addons-410268"
	I1217 11:15:37.202546 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.202898 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.202934 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.203153 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:37.203362 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:37.203373 1350845 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-410268 && echo "addons-410268" | sudo tee /etc/hostname
	I1217 11:15:37.330802 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-410268
	
	I1217 11:15:37.333935 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.334380 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.334414 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.334576 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:37.334827 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:37.334847 1350845 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-410268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-410268/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-410268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 11:15:37.457116 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 11:15:37.457148 1350845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 11:15:37.457175 1350845 buildroot.go:174] setting up certificates
	I1217 11:15:37.457199 1350845 provision.go:84] configureAuth start
	I1217 11:15:37.460061 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.460552 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.460593 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.462938 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.463326 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.463353 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.463519 1350845 provision.go:143] copyHostCerts
	I1217 11:15:37.463618 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 11:15:37.463893 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 11:15:37.464097 1350845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 11:15:37.464206 1350845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.addons-410268 san=[127.0.0.1 192.168.39.28 addons-410268 localhost minikube]
	I1217 11:15:37.539990 1350845 provision.go:177] copyRemoteCerts
	I1217 11:15:37.540073 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 11:15:37.542671 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.543086 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.543114 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.543273 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:15:37.629316 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 11:15:37.655732 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 11:15:37.681614 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 11:15:37.708265 1350845 provision.go:87] duration metric: took 251.048973ms to configureAuth
	I1217 11:15:37.708292 1350845 buildroot.go:189] setting minikube options for container-runtime
	I1217 11:15:37.708493 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:15:37.711190 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.711570 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.711604 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.711764 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:37.711995 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:37.712015 1350845 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 11:15:37.948745 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 11:15:37.948780 1350845 machine.go:97] duration metric: took 863.272475ms to provisionDockerMachine
	I1217 11:15:37.948795 1350845 client.go:176] duration metric: took 17.303118011s to LocalClient.Create
	I1217 11:15:37.948819 1350845 start.go:167] duration metric: took 17.303180981s to libmachine.API.Create "addons-410268"
	I1217 11:15:37.948827 1350845 start.go:293] postStartSetup for "addons-410268" (driver="kvm2")
	I1217 11:15:37.948849 1350845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 11:15:37.948938 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 11:15:37.952476 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.953079 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:37.953109 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:37.953318 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:15:38.040194 1350845 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 11:15:38.045641 1350845 info.go:137] Remote host: Buildroot 2025.02
	I1217 11:15:38.045671 1350845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 11:15:38.045768 1350845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 11:15:38.045805 1350845 start.go:296] duration metric: took 96.971166ms for postStartSetup
	I1217 11:15:38.048732 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.049146 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:38.049179 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.049388 1350845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/config.json ...
	I1217 11:15:38.049575 1350845 start.go:128] duration metric: took 17.405503986s to createHost
	I1217 11:15:38.052486 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.053692 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:38.053720 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.053905 1350845 main.go:143] libmachine: Using SSH client type: native
	I1217 11:15:38.054105 1350845 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1217 11:15:38.054127 1350845 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 11:15:38.170852 1350845 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765970138.128366888
	
	I1217 11:15:38.170878 1350845 fix.go:216] guest clock: 1765970138.128366888
	I1217 11:15:38.170888 1350845 fix.go:229] Guest: 2025-12-17 11:15:38.128366888 +0000 UTC Remote: 2025-12-17 11:15:38.049587758 +0000 UTC m=+17.508960444 (delta=78.77913ms)
	I1217 11:15:38.170911 1350845 fix.go:200] guest clock delta is within tolerance: 78.77913ms
	I1217 11:15:38.170918 1350845 start.go:83] releasing machines lock for "addons-410268", held for 17.526964872s
	I1217 11:15:38.173738 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.174192 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:38.174227 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.174468 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:38.174512 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:15:38.174539 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:38.174563 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	W1217 11:15:38.174630 1350845 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: no such file or directory
	I1217 11:15:38.175059 1350845 ssh_runner.go:195] Run: cat /version.json
	I1217 11:15:38.175136 1350845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 11:15:38.178170 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.178339 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.178602 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:38.178626 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.178741 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:38.178765 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:38.178791 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:15:38.179002 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:15:38.262085 1350845 ssh_runner.go:195] Run: systemctl --version
	I1217 11:15:38.295594 1350845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 11:15:38.450340 1350845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 11:15:38.457211 1350845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 11:15:38.457297 1350845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 11:15:38.476258 1350845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 11:15:38.476288 1350845 start.go:496] detecting cgroup driver to use...
	I1217 11:15:38.476363 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 11:15:38.495138 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 11:15:38.510957 1350845 docker.go:218] disabling cri-docker service (if available) ...
	I1217 11:15:38.511041 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 11:15:38.528272 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 11:15:38.543641 1350845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 11:15:38.687153 1350845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 11:15:38.894865 1350845 docker.go:234] disabling docker service ...
	I1217 11:15:38.894938 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 11:15:38.910577 1350845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 11:15:38.924533 1350845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 11:15:39.079546 1350845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 11:15:39.216531 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 11:15:39.232295 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 11:15:39.253427 1350845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 11:15:39.253562 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.264897 1350845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 11:15:39.264989 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.276277 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.287433 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.298410 1350845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 11:15:39.310245 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.321297 1350845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.340448 1350845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 11:15:39.351627 1350845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 11:15:39.360944 1350845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 11:15:39.361048 1350845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 11:15:39.380732 1350845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 11:15:39.393178 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:39.528134 1350845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 11:15:39.641833 1350845 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 11:15:39.641931 1350845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 11:15:39.647555 1350845 start.go:564] Will wait 60s for crictl version
	I1217 11:15:39.647614 1350845 ssh_runner.go:195] Run: which crictl
	I1217 11:15:39.651367 1350845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 11:15:39.682101 1350845 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 11:15:39.682201 1350845 ssh_runner.go:195] Run: crio --version
	I1217 11:15:39.708561 1350845 ssh_runner.go:195] Run: crio --version
	I1217 11:15:39.735661 1350845 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 11:15:39.739497 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:39.739913 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:15:39.739937 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:15:39.740138 1350845 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 11:15:39.744417 1350845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:15:39.758566 1350845 kubeadm.go:884] updating cluster {Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 11:15:39.758708 1350845 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:15:39.758788 1350845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:39.786378 1350845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 11:15:39.786447 1350845 ssh_runner.go:195] Run: which lz4
	I1217 11:15:39.790528 1350845 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 11:15:39.794847 1350845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 11:15:39.794883 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 11:15:40.941493 1350845 crio.go:462] duration metric: took 1.150980208s to copy over tarball
	I1217 11:15:40.941601 1350845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 11:15:42.406837 1350845 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.465192902s)
	I1217 11:15:42.406885 1350845 crio.go:469] duration metric: took 1.465353127s to extract the tarball
	I1217 11:15:42.406898 1350845 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 11:15:42.442447 1350845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 11:15:42.480775 1350845 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 11:15:42.480799 1350845 cache_images.go:86] Images are preloaded, skipping loading
	I1217 11:15:42.480807 1350845 kubeadm.go:935] updating node { 192.168.39.28 8443 v1.34.3 crio true true} ...
	I1217 11:15:42.480897 1350845 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-410268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 11:15:42.481002 1350845 ssh_runner.go:195] Run: crio config
	I1217 11:15:42.525105 1350845 cni.go:84] Creating CNI manager for ""
	I1217 11:15:42.525134 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:15:42.525156 1350845 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 11:15:42.525186 1350845 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-410268 NodeName:addons-410268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 11:15:42.525314 1350845 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-410268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.28"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 11:15:42.525415 1350845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 11:15:42.537907 1350845 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 11:15:42.538001 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 11:15:42.549456 1350845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1217 11:15:42.569830 1350845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 11:15:42.589621 1350845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1217 11:15:42.610114 1350845 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I1217 11:15:42.614161 1350845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 11:15:42.631453 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:15:42.777216 1350845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:15:42.807755 1350845 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268 for IP: 192.168.39.28
	I1217 11:15:42.807787 1350845 certs.go:195] generating shared ca certs ...
	I1217 11:15:42.807812 1350845 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:42.808016 1350845 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 11:15:42.917178 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt ...
	I1217 11:15:42.917218 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt: {Name:mk924e10cdeab37a6839cfe0bd545c6ef1af1151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:42.917406 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key ...
	I1217 11:15:42.917419 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key: {Name:mk71344f89d4a5b6338f9f1dcf1de80ad0eb74b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:42.917493 1350845 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 11:15:42.943644 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt ...
	I1217 11:15:42.943676 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt: {Name:mkac74a4090a4cfd9810679a72eb27b16dcbc70f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:42.943845 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key ...
	I1217 11:15:42.943857 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key: {Name:mk8c3ff93ea81b44b2dfb1c45d13eea2b0341cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:42.943927 1350845 certs.go:257] generating profile certs ...
	I1217 11:15:42.944006 1350845 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key
	I1217 11:15:42.944030 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt with IP's: []
	I1217 11:15:43.134502 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt ...
	I1217 11:15:43.134538 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: {Name:mk5424d0d2090e412eb1218c16143dc04c000352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.134770 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key ...
	I1217 11:15:43.134787 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.key: {Name:mk523ac9e78d3d64f6a3a3c09323f75212a30bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.134916 1350845 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35
	I1217 11:15:43.134939 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.28]
	I1217 11:15:43.173860 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 ...
	I1217 11:15:43.173892 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35: {Name:mk73e9f29099141c309fd594f0cc386347876e61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.174126 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35 ...
	I1217 11:15:43.174148 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35: {Name:mk6f66f6d6fe4b58e3f2eb4739723a42f05d6e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.174265 1350845 certs.go:382] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt.8b3aaf35 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt
	I1217 11:15:43.174357 1350845 certs.go:386] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key.8b3aaf35 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key
	I1217 11:15:43.174411 1350845 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key
	I1217 11:15:43.174437 1350845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt with IP's: []
	I1217 11:15:43.329546 1350845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt ...
	I1217 11:15:43.329581 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt: {Name:mk6ea6acb6ee7459e3182ed91ab2506f933c6bf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.329807 1350845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key ...
	I1217 11:15:43.329828 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key: {Name:mkf387330626a1f9c0557f85211ad7b7066f7816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:43.330070 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 11:15:43.330151 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 11:15:43.330180 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 11:15:43.330206 1350845 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 11:15:43.330796 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 11:15:43.361157 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 11:15:43.390584 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 11:15:43.421164 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 11:15:43.449959 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 11:15:43.493268 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 11:15:43.528368 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 11:15:43.557442 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1217 11:15:43.586386 1350845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 11:15:43.616919 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 11:15:43.638028 1350845 ssh_runner.go:195] Run: openssl version
	I1217 11:15:43.644308 1350845 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:43.655594 1350845 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 11:15:43.667095 1350845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:43.672175 1350845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:43.672235 1350845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 11:15:43.679759 1350845 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 11:15:43.691695 1350845 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 11:15:43.703669 1350845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 11:15:43.708405 1350845 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 11:15:43.708484 1350845 kubeadm.go:401] StartCluster: {Name:addons-410268 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-410268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:15:43.708562 1350845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 11:15:43.708615 1350845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 11:15:43.740471 1350845 cri.go:89] found id: ""
	I1217 11:15:43.740553 1350845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 11:15:43.752356 1350845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 11:15:43.763938 1350845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 11:15:43.777880 1350845 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 11:15:43.777918 1350845 kubeadm.go:158] found existing configuration files:
	
	I1217 11:15:43.778010 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 11:15:43.790999 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 11:15:43.791096 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 11:15:43.802800 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 11:15:43.813617 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 11:15:43.813701 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 11:15:43.825578 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 11:15:43.836395 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 11:15:43.836495 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 11:15:43.847921 1350845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 11:15:43.858810 1350845 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 11:15:43.858895 1350845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 11:15:43.870185 1350845 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 11:15:44.005684 1350845 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 11:15:55.523132 1350845 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 11:15:55.523210 1350845 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 11:15:55.523301 1350845 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 11:15:55.523417 1350845 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 11:15:55.523541 1350845 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 11:15:55.523649 1350845 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 11:15:55.525182 1350845 out.go:252]   - Generating certificates and keys ...
	I1217 11:15:55.525296 1350845 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 11:15:55.525371 1350845 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 11:15:55.525461 1350845 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 11:15:55.525557 1350845 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 11:15:55.525632 1350845 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 11:15:55.525679 1350845 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 11:15:55.525729 1350845 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 11:15:55.525874 1350845 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-410268 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
	I1217 11:15:55.525964 1350845 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 11:15:55.526122 1350845 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-410268 localhost] and IPs [192.168.39.28 127.0.0.1 ::1]
	I1217 11:15:55.526180 1350845 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 11:15:55.526255 1350845 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 11:15:55.526294 1350845 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 11:15:55.526336 1350845 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 11:15:55.526375 1350845 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 11:15:55.526447 1350845 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 11:15:55.526513 1350845 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 11:15:55.526566 1350845 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 11:15:55.526659 1350845 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 11:15:55.526776 1350845 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 11:15:55.526870 1350845 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 11:15:55.528161 1350845 out.go:252]   - Booting up control plane ...
	I1217 11:15:55.528262 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 11:15:55.528349 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 11:15:55.528429 1350845 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 11:15:55.528544 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 11:15:55.528668 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 11:15:55.528820 1350845 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 11:15:55.528951 1350845 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 11:15:55.529031 1350845 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 11:15:55.529158 1350845 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 11:15:55.529319 1350845 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 11:15:55.529403 1350845 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002568436s
	I1217 11:15:55.529516 1350845 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 11:15:55.529613 1350845 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.28:8443/livez
	I1217 11:15:55.529699 1350845 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 11:15:55.529774 1350845 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 11:15:55.529839 1350845 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.263479249s
	I1217 11:15:55.529896 1350845 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.536625223s
	I1217 11:15:55.529955 1350845 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501854635s
	I1217 11:15:55.530062 1350845 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 11:15:55.530234 1350845 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 11:15:55.530302 1350845 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 11:15:55.530525 1350845 kubeadm.go:319] [mark-control-plane] Marking the node addons-410268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 11:15:55.530591 1350845 kubeadm.go:319] [bootstrap-token] Using token: 43l6ve.l582r2mo3awbrhao
	I1217 11:15:55.532627 1350845 out.go:252]   - Configuring RBAC rules ...
	I1217 11:15:55.532727 1350845 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 11:15:55.532804 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 11:15:55.532927 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 11:15:55.533086 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 11:15:55.533294 1350845 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 11:15:55.533425 1350845 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 11:15:55.533566 1350845 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 11:15:55.533633 1350845 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 11:15:55.533696 1350845 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 11:15:55.533705 1350845 kubeadm.go:319] 
	I1217 11:15:55.533791 1350845 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 11:15:55.533800 1350845 kubeadm.go:319] 
	I1217 11:15:55.533903 1350845 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 11:15:55.533913 1350845 kubeadm.go:319] 
	I1217 11:15:55.533951 1350845 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 11:15:55.534056 1350845 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 11:15:55.534143 1350845 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 11:15:55.534154 1350845 kubeadm.go:319] 
	I1217 11:15:55.534233 1350845 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 11:15:55.534246 1350845 kubeadm.go:319] 
	I1217 11:15:55.534314 1350845 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 11:15:55.534322 1350845 kubeadm.go:319] 
	I1217 11:15:55.534381 1350845 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 11:15:55.534451 1350845 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 11:15:55.534536 1350845 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 11:15:55.534547 1350845 kubeadm.go:319] 
	I1217 11:15:55.534656 1350845 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 11:15:55.534757 1350845 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 11:15:55.534771 1350845 kubeadm.go:319] 
	I1217 11:15:55.534892 1350845 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 43l6ve.l582r2mo3awbrhao \
	I1217 11:15:55.535006 1350845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 \
	I1217 11:15:55.535029 1350845 kubeadm.go:319] 	--control-plane 
	I1217 11:15:55.535033 1350845 kubeadm.go:319] 
	I1217 11:15:55.535109 1350845 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 11:15:55.535118 1350845 kubeadm.go:319] 
	I1217 11:15:55.535193 1350845 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 43l6ve.l582r2mo3awbrhao \
	I1217 11:15:55.535298 1350845 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 
	I1217 11:15:55.535317 1350845 cni.go:84] Creating CNI manager for ""
	I1217 11:15:55.535329 1350845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:15:55.537517 1350845 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 11:15:55.538705 1350845 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 11:15:55.551978 1350845 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 11:15:55.577547 1350845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 11:15:55.577626 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:55.577702 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-410268 minikube.k8s.io/updated_at=2025_12_17T11_15_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=addons-410268 minikube.k8s.io/primary=true
	I1217 11:15:55.623901 1350845 ops.go:34] apiserver oom_adj: -16
	I1217 11:15:55.726309 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:56.227208 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:56.727180 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:57.227401 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:57.727100 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:58.226689 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:58.727262 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:59.226401 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:15:59.727304 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:16:00.227368 1350845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 11:16:00.338042 1350845 kubeadm.go:1114] duration metric: took 4.760491571s to wait for elevateKubeSystemPrivileges
	I1217 11:16:00.338080 1350845 kubeadm.go:403] duration metric: took 16.629604919s to StartCluster
	I1217 11:16:00.338102 1350845 settings.go:142] acquiring lock: {Name:mkab196c8ac23f97b54763cecaa5ac5ac8f7dd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:16:00.338257 1350845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:16:00.338838 1350845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:16:00.339139 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 11:16:00.339160 1350845 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 11:16:00.339131 1350845 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 11:16:00.339263 1350845 addons.go:70] Setting yakd=true in profile "addons-410268"
	I1217 11:16:00.339272 1350845 addons.go:70] Setting default-storageclass=true in profile "addons-410268"
	I1217 11:16:00.339281 1350845 addons.go:239] Setting addon yakd=true in "addons-410268"
	I1217 11:16:00.339284 1350845 addons.go:70] Setting inspektor-gadget=true in profile "addons-410268"
	I1217 11:16:00.339324 1350845 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-410268"
	I1217 11:16:00.339329 1350845 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-410268"
	I1217 11:16:00.339340 1350845 addons.go:70] Setting ingress=true in profile "addons-410268"
	I1217 11:16:00.339349 1350845 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-410268"
	I1217 11:16:00.339352 1350845 addons.go:70] Setting registry=true in profile "addons-410268"
	I1217 11:16:00.339361 1350845 addons.go:239] Setting addon ingress=true in "addons-410268"
	I1217 11:16:00.339369 1350845 addons.go:239] Setting addon registry=true in "addons-410268"
	I1217 11:16:00.339372 1350845 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-410268"
	I1217 11:16:00.339400 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339406 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339421 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:16:00.339434 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339469 1350845 addons.go:70] Setting metrics-server=true in profile "addons-410268"
	I1217 11:16:00.339482 1350845 addons.go:239] Setting addon metrics-server=true in "addons-410268"
	I1217 11:16:00.339501 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339295 1350845 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-410268"
	I1217 11:16:00.339307 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.340435 1350845 addons.go:70] Setting ingress-dns=true in profile "addons-410268"
	I1217 11:16:00.340492 1350845 addons.go:239] Setting addon ingress-dns=true in "addons-410268"
	I1217 11:16:00.340535 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.340763 1350845 addons.go:70] Setting volcano=true in profile "addons-410268"
	I1217 11:16:00.339304 1350845 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-410268"
	I1217 11:16:00.340794 1350845 addons.go:239] Setting addon volcano=true in "addons-410268"
	I1217 11:16:00.340799 1350845 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-410268"
	I1217 11:16:00.340828 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339318 1350845 addons.go:239] Setting addon inspektor-gadget=true in "addons-410268"
	I1217 11:16:00.340849 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.340830 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.341293 1350845 out.go:179] * Verifying Kubernetes components...
	I1217 11:16:00.339372 1350845 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-410268"
	I1217 11:16:00.341503 1350845 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-410268"
	I1217 11:16:00.341530 1350845 addons.go:70] Setting volumesnapshots=true in profile "addons-410268"
	I1217 11:16:00.341544 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.341547 1350845 addons.go:239] Setting addon volumesnapshots=true in "addons-410268"
	I1217 11:16:00.341572 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339315 1350845 addons.go:70] Setting storage-provisioner=true in profile "addons-410268"
	I1217 11:16:00.339318 1350845 addons.go:70] Setting cloud-spanner=true in profile "addons-410268"
	I1217 11:16:00.341928 1350845 addons.go:239] Setting addon cloud-spanner=true in "addons-410268"
	I1217 11:16:00.341976 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.339328 1350845 addons.go:70] Setting gcp-auth=true in profile "addons-410268"
	I1217 11:16:00.342027 1350845 mustload.go:66] Loading cluster: addons-410268
	I1217 11:16:00.339332 1350845 addons.go:70] Setting registry-creds=true in profile "addons-410268"
	I1217 11:16:00.342061 1350845 addons.go:239] Setting addon registry-creds=true in "addons-410268"
	I1217 11:16:00.342093 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.342245 1350845 config.go:182] Loaded profile config "addons-410268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:16:00.341905 1350845 addons.go:239] Setting addon storage-provisioner=true in "addons-410268"
	I1217 11:16:00.342340 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.343127 1350845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 11:16:00.347113 1350845 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 11:16:00.347233 1350845 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 11:16:00.347928 1350845 addons.go:239] Setting addon default-storageclass=true in "addons-410268"
	I1217 11:16:00.347928 1350845 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-410268"
	I1217 11:16:00.348043 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.348000 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.348345 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 11:16:00.348364 1350845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 11:16:00.348405 1350845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:16:00.349587 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 11:16:00.349589 1350845 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	W1217 11:16:00.349672 1350845 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 11:16:00.349716 1350845 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 11:16:00.349728 1350845 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 11:16:00.350777 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 11:16:00.350797 1350845 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 11:16:00.351538 1350845 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:16:00.351558 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 11:16:00.351667 1350845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:16:00.352061 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:00.352091 1350845 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 11:16:00.352094 1350845 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 11:16:00.352100 1350845 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 11:16:00.352108 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 11:16:00.352091 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 11:16:00.352123 1350845 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 11:16:00.352151 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 11:16:00.352882 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 11:16:00.352954 1350845 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 11:16:00.352463 1350845 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 11:16:00.353725 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 11:16:00.353744 1350845 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 11:16:00.353777 1350845 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 11:16:00.354105 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 11:16:00.353902 1350845 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 11:16:00.354274 1350845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 11:16:00.354503 1350845 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:16:00.354511 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:16:00.354524 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 11:16:00.354530 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 11:16:00.354538 1350845 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:16:00.354550 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 11:16:00.354706 1350845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 11:16:00.354735 1350845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:16:00.355075 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 11:16:00.355514 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 11:16:00.355556 1350845 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:16:00.356042 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 11:16:00.356406 1350845 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 11:16:00.356542 1350845 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:16:00.356832 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 11:16:00.358578 1350845 out.go:179]   - Using image docker.io/busybox:stable
	I1217 11:16:00.358599 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 11:16:00.359115 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.359783 1350845 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:16:00.359800 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 11:16:00.360824 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 11:16:00.361440 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.361475 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.362292 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.362395 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.362903 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.363152 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 11:16:00.364088 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.364461 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.364498 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.364860 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.364916 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.365046 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.365322 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 11:16:00.365794 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.365928 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.365967 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.366436 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.366547 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.366638 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.367725 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.367766 1350845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 11:16:00.367854 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.368127 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.368442 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.368479 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.368744 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.368773 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.368803 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.368880 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 11:16:00.368898 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 11:16:00.368915 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.369113 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.369720 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.369744 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.369813 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.369844 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.369892 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.369860 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.370036 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.370129 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.370166 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.370165 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.370187 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.370283 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.370307 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.370504 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.370676 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.370941 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.370944 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.371010 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.371199 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.371224 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.371511 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.371549 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.371543 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.371781 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.372169 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.372753 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.372786 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.372959 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:00.374371 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.374831 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:00.374851 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:00.375030 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	W1217 11:16:00.746241 1350845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37266->192.168.39.28:22: read: connection reset by peer
	I1217 11:16:00.746304 1350845 retry.go:31] will retry after 298.461677ms: ssh: handshake failed: read tcp 192.168.39.1:37266->192.168.39.28:22: read: connection reset by peer
	W1217 11:16:00.808332 1350845 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:37284->192.168.39.28:22: read: connection reset by peer
	I1217 11:16:00.808368 1350845 retry.go:31] will retry after 197.5272ms: ssh: handshake failed: read tcp 192.168.39.1:37284->192.168.39.28:22: read: connection reset by peer
	I1217 11:16:01.316212 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 11:16:01.316254 1350845 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 11:16:01.317287 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 11:16:01.317305 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 11:16:01.321082 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 11:16:01.327792 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 11:16:01.327820 1350845 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 11:16:01.368636 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 11:16:01.371462 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 11:16:01.400566 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 11:16:01.405126 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 11:16:01.406779 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 11:16:01.510859 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 11:16:01.511125 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 11:16:01.583201 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 11:16:01.615683 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 11:16:01.615720 1350845 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 11:16:01.791361 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 11:16:01.791387 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 11:16:01.915513 1350845 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:16:01.915537 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 11:16:01.943003 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 11:16:01.943038 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 11:16:02.058501 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 11:16:02.058540 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 11:16:02.134291 1350845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.795102297s)
	I1217 11:16:02.134358 1350845 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.791152272s)
	I1217 11:16:02.134449 1350845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 11:16:02.134493 1350845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 11:16:02.140834 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 11:16:02.203927 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 11:16:02.203964 1350845 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 11:16:02.297250 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 11:16:02.297288 1350845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 11:16:02.340129 1350845 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 11:16:02.340159 1350845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 11:16:02.344606 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 11:16:02.437144 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 11:16:02.437177 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 11:16:02.502765 1350845 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:16:02.502800 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 11:16:02.615967 1350845 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:16:02.616026 1350845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 11:16:02.644799 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 11:16:02.644847 1350845 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 11:16:02.805794 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 11:16:02.805836 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 11:16:02.892541 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 11:16:03.030936 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 11:16:03.042085 1350845 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:16:03.042126 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 11:16:03.219517 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 11:16:03.219558 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 11:16:03.355025 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.033901875s)
	I1217 11:16:03.461505 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:16:03.557899 1350845 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 11:16:03.557930 1350845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 11:16:03.935096 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 11:16:03.935126 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 11:16:04.201973 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 11:16:04.202013 1350845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 11:16:04.501645 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 11:16:04.501678 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 11:16:04.866099 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 11:16:04.866135 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 11:16:05.324589 1350845 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:16:05.324617 1350845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 11:16:05.569817 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 11:16:06.724055 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.355381318s)
	I1217 11:16:06.724144 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.352648166s)
	I1217 11:16:06.724229 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.319077403s)
	I1217 11:16:06.724283 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.323691313s)
	I1217 11:16:06.724368 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.317560985s)
	I1217 11:16:06.724440 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.213287726s)
	I1217 11:16:07.772293 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 11:16:07.775596 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:07.776100 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:07.776143 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:07.776333 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:08.153076 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.642170136s)
	I1217 11:16:08.153132 1350845 addons.go:495] Verifying addon ingress=true in "addons-410268"
	I1217 11:16:08.153196 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.569954536s)
	I1217 11:16:08.153250 1350845 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.018779215s)
	I1217 11:16:08.153405 1350845 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.018885637s)
	I1217 11:16:08.153436 1350845 start.go:1013] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1217 11:16:08.153477 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.012588453s)
	I1217 11:16:08.153554 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.808910052s)
	I1217 11:16:08.153581 1350845 addons.go:495] Verifying addon registry=true in "addons-410268"
	I1217 11:16:08.153733 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.122756962s)
	I1217 11:16:08.153774 1350845 addons.go:495] Verifying addon metrics-server=true in "addons-410268"
	I1217 11:16:08.153633 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.261054783s)
	I1217 11:16:08.154290 1350845 node_ready.go:35] waiting up to 6m0s for node "addons-410268" to be "Ready" ...
	I1217 11:16:08.155148 1350845 out.go:179] * Verifying registry addon...
	I1217 11:16:08.155157 1350845 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-410268 service yakd-dashboard -n yakd-dashboard
	
	I1217 11:16:08.155148 1350845 out.go:179] * Verifying ingress addon...
	I1217 11:16:08.157159 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 11:16:08.157362 1350845 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 11:16:08.192726 1350845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 11:16:08.211858 1350845 node_ready.go:49] node "addons-410268" is "Ready"
	I1217 11:16:08.211890 1350845 node_ready.go:38] duration metric: took 57.576108ms for node "addons-410268" to be "Ready" ...
	I1217 11:16:08.211910 1350845 api_server.go:52] waiting for apiserver process to appear ...
	I1217 11:16:08.211973 1350845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:16:08.237579 1350845 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 11:16:08.237596 1350845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 11:16:08.237603 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.237611 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:08.373486 1350845 addons.go:239] Setting addon gcp-auth=true in "addons-410268"
	I1217 11:16:08.373555 1350845 host.go:66] Checking if "addons-410268" exists ...
	I1217 11:16:08.375819 1350845 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 11:16:08.378843 1350845 main.go:143] libmachine: domain addons-410268 has defined MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:08.379398 1350845 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:35:b5:14", ip: ""} in network mk-addons-410268: {Iface:virbr1 ExpiryTime:2025-12-17 12:15:35 +0000 UTC Type:0 Mac:52:54:00:35:b5:14 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:addons-410268 Clientid:01:52:54:00:35:b5:14}
	I1217 11:16:08.379437 1350845 main.go:143] libmachine: domain addons-410268 has defined IP address 192.168.39.28 and MAC address 52:54:00:35:b5:14 in network mk-addons-410268
	I1217 11:16:08.379645 1350845 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/addons-410268/id_rsa Username:docker}
	I1217 11:16:08.789395 1350845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-410268" context rescaled to 1 replicas
	I1217 11:16:08.796855 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:08.802172 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.048869 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.587319964s)
	W1217 11:16:09.048945 1350845 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 11:16:09.048994 1350845 retry.go:31] will retry after 254.816128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 11:16:09.217737 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.217999 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:09.304709 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 11:16:09.674715 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:09.675113 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.164483 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.164559 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.169221 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.599348398s)
	I1217 11:16:10.169247 1350845 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.957246506s)
	I1217 11:16:10.169266 1350845 api_server.go:72] duration metric: took 9.829991807s to wait for apiserver process to appear ...
	I1217 11:16:10.169263 1350845 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-410268"
	I1217 11:16:10.169274 1350845 api_server.go:88] waiting for apiserver healthz status ...
	I1217 11:16:10.169295 1350845 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1217 11:16:10.169316 1350845 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.793467482s)
	I1217 11:16:10.170896 1350845 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 11:16:10.170908 1350845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 11:16:10.172141 1350845 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 11:16:10.172723 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 11:16:10.173213 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 11:16:10.173235 1350845 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 11:16:10.190414 1350845 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 11:16:10.190439 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.199394 1350845 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I1217 11:16:10.217685 1350845 api_server.go:141] control plane version: v1.34.3
	I1217 11:16:10.217721 1350845 api_server.go:131] duration metric: took 48.440983ms to wait for apiserver health ...
	I1217 11:16:10.217731 1350845 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 11:16:10.269581 1350845 system_pods.go:59] 20 kube-system pods found
	I1217 11:16:10.269621 1350845 system_pods.go:61] "amd-gpu-device-plugin-7vz7s" [d5f6f486-f31c-465a-bbac-0cabfeabfa57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:10.269630 1350845 system_pods.go:61] "coredns-66bc5c9577-f9dfv" [b3c65235-f139-4f33-adef-fc6ef1ccb253] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:10.269638 1350845 system_pods.go:61] "coredns-66bc5c9577-svfjn" [e8aebe9d-3a17-487e-be9b-4e688cd2b8bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:10.269644 1350845 system_pods.go:61] "csi-hostpath-attacher-0" [67dac145-8016-43d9-913c-e078ba2ba440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:10.269650 1350845 system_pods.go:61] "csi-hostpath-resizer-0" [70902375-0f7e-4cac-902c-bfb8dc1b0407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:10.269668 1350845 system_pods.go:61] "csi-hostpathplugin-674kp" [8d5e02ac-f5bd-46e2-8ddb-18cdde14e1bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:10.269674 1350845 system_pods.go:61] "etcd-addons-410268" [05d85a95-6449-4397-adfb-9a20407a423a] Running
	I1217 11:16:10.269679 1350845 system_pods.go:61] "kube-apiserver-addons-410268" [13816250-d3c5-4d81-ad74-ffe9cb3ddbc5] Running
	I1217 11:16:10.269687 1350845 system_pods.go:61] "kube-controller-manager-addons-410268" [5aa78d38-35e1-472f-8299-cfc242fca369] Running
	I1217 11:16:10.269696 1350845 system_pods.go:61] "kube-ingress-dns-minikube" [6073097f-5ea5-4564-9be4-35f9191742dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:10.269701 1350845 system_pods.go:61] "kube-proxy-6pdv6" [c6d1e053-5420-4db6-a1f6-daab3034e85c] Running
	I1217 11:16:10.269722 1350845 system_pods.go:61] "kube-scheduler-addons-410268" [2401fedc-c4f4-48eb-9807-2abc585513d0] Running
	I1217 11:16:10.269730 1350845 system_pods.go:61] "metrics-server-85b7d694d7-wzdd7" [45eadf4d-9bab-4bbf-88c7-99c4433a113d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:10.269741 1350845 system_pods.go:61] "nvidia-device-plugin-daemonset-5czqh" [22222c18-08cb-4be5-93fc-4e2715120b95] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:10.269751 1350845 system_pods.go:61] "registry-6b586f9694-zzpqs" [5234c3bf-e000-4d51-80db-779c52aba6bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:10.269756 1350845 system_pods.go:61] "registry-creds-764b6fb674-4z6q4" [eb27db8e-73bb-47b3-b506-a5be0bb9dbdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:10.269761 1350845 system_pods.go:61] "registry-proxy-tgq9f" [acc44f29-6589-4709-855b-7ecb669c57b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:10.269766 1350845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4d5hl" [47a9cd1f-a9eb-4de4-abf7-4a920d621e74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:10.269771 1350845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-cgr4k" [54fa31c8-1652-4949-a486-f0f561074620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:10.269775 1350845 system_pods.go:61] "storage-provisioner" [c32b2c11-7a98-48ba-89d5-3a5e581c171b] Running
	I1217 11:16:10.269782 1350845 system_pods.go:74] duration metric: took 52.044895ms to wait for pod list to return data ...
	I1217 11:16:10.269792 1350845 default_sa.go:34] waiting for default service account to be created ...
	I1217 11:16:10.275299 1350845 default_sa.go:45] found service account: "default"
	I1217 11:16:10.275318 1350845 default_sa.go:55] duration metric: took 5.520735ms for default service account to be created ...
	I1217 11:16:10.275326 1350845 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 11:16:10.280468 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 11:16:10.280491 1350845 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 11:16:10.280935 1350845 system_pods.go:86] 20 kube-system pods found
	I1217 11:16:10.280970 1350845 system_pods.go:89] "amd-gpu-device-plugin-7vz7s" [d5f6f486-f31c-465a-bbac-0cabfeabfa57] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 11:16:10.280976 1350845 system_pods.go:89] "coredns-66bc5c9577-f9dfv" [b3c65235-f139-4f33-adef-fc6ef1ccb253] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:10.281007 1350845 system_pods.go:89] "coredns-66bc5c9577-svfjn" [e8aebe9d-3a17-487e-be9b-4e688cd2b8bd] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 11:16:10.281015 1350845 system_pods.go:89] "csi-hostpath-attacher-0" [67dac145-8016-43d9-913c-e078ba2ba440] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 11:16:10.281023 1350845 system_pods.go:89] "csi-hostpath-resizer-0" [70902375-0f7e-4cac-902c-bfb8dc1b0407] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 11:16:10.281032 1350845 system_pods.go:89] "csi-hostpathplugin-674kp" [8d5e02ac-f5bd-46e2-8ddb-18cdde14e1bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 11:16:10.281042 1350845 system_pods.go:89] "etcd-addons-410268" [05d85a95-6449-4397-adfb-9a20407a423a] Running
	I1217 11:16:10.281049 1350845 system_pods.go:89] "kube-apiserver-addons-410268" [13816250-d3c5-4d81-ad74-ffe9cb3ddbc5] Running
	I1217 11:16:10.281054 1350845 system_pods.go:89] "kube-controller-manager-addons-410268" [5aa78d38-35e1-472f-8299-cfc242fca369] Running
	I1217 11:16:10.281061 1350845 system_pods.go:89] "kube-ingress-dns-minikube" [6073097f-5ea5-4564-9be4-35f9191742dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 11:16:10.281066 1350845 system_pods.go:89] "kube-proxy-6pdv6" [c6d1e053-5420-4db6-a1f6-daab3034e85c] Running
	I1217 11:16:10.281070 1350845 system_pods.go:89] "kube-scheduler-addons-410268" [2401fedc-c4f4-48eb-9807-2abc585513d0] Running
	I1217 11:16:10.281075 1350845 system_pods.go:89] "metrics-server-85b7d694d7-wzdd7" [45eadf4d-9bab-4bbf-88c7-99c4433a113d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 11:16:10.281089 1350845 system_pods.go:89] "nvidia-device-plugin-daemonset-5czqh" [22222c18-08cb-4be5-93fc-4e2715120b95] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 11:16:10.281094 1350845 system_pods.go:89] "registry-6b586f9694-zzpqs" [5234c3bf-e000-4d51-80db-779c52aba6bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 11:16:10.281099 1350845 system_pods.go:89] "registry-creds-764b6fb674-4z6q4" [eb27db8e-73bb-47b3-b506-a5be0bb9dbdb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 11:16:10.281103 1350845 system_pods.go:89] "registry-proxy-tgq9f" [acc44f29-6589-4709-855b-7ecb669c57b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 11:16:10.281109 1350845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4d5hl" [47a9cd1f-a9eb-4de4-abf7-4a920d621e74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:10.281118 1350845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-cgr4k" [54fa31c8-1652-4949-a486-f0f561074620] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 11:16:10.281124 1350845 system_pods.go:89] "storage-provisioner" [c32b2c11-7a98-48ba-89d5-3a5e581c171b] Running
	I1217 11:16:10.281134 1350845 system_pods.go:126] duration metric: took 5.801932ms to wait for k8s-apps to be running ...
	I1217 11:16:10.281145 1350845 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 11:16:10.281197 1350845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:16:10.369790 1350845 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:16:10.369815 1350845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 11:16:10.421214 1350845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 11:16:10.663238 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:10.663813 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:10.677340 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:10.991243 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.686475947s)
	I1217 11:16:10.991280 1350845 system_svc.go:56] duration metric: took 710.125534ms WaitForService to wait for kubelet
	I1217 11:16:10.991310 1350845 kubeadm.go:587] duration metric: took 10.652032407s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 11:16:10.991331 1350845 node_conditions.go:102] verifying NodePressure condition ...
	I1217 11:16:10.997161 1350845 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 11:16:10.997184 1350845 node_conditions.go:123] node cpu capacity is 2
	I1217 11:16:10.997205 1350845 node_conditions.go:105] duration metric: took 5.869128ms to run NodePressure ...
	I1217 11:16:10.997219 1350845 start.go:242] waiting for startup goroutines ...
	I1217 11:16:11.161706 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.163637 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.176522 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:11.566474 1350845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.145204213s)
	I1217 11:16:11.567507 1350845 addons.go:495] Verifying addon gcp-auth=true in "addons-410268"
	I1217 11:16:11.569240 1350845 out.go:179] * Verifying gcp-auth addon...
	I1217 11:16:11.570879 1350845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 11:16:11.596054 1350845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 11:16:11.596073 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:11.671291 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:11.673706 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:11.684244 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.077315 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.178555 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.178936 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.181602 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:12.577539 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:12.669030 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:12.669528 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:12.679866 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.075174 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.178731 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:13.179395 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.180030 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.576924 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:13.665890 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:13.666082 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:13.678398 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.078560 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.164144 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.164505 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.177574 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:14.576234 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:14.660840 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:14.660849 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:14.675762 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.075839 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.175893 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.176798 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.178011 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:15.574251 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:15.660751 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:15.661469 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:15.676483 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.075670 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.160596 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.162585 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.176198 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:16.574716 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:16.661463 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:16.662049 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:16.676136 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.074579 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.162321 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.162372 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.177632 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:17.577901 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:17.665877 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:17.666282 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:17.677730 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.075105 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.161380 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.162494 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.175803 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:18.574399 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:18.661163 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:18.662266 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:18.675815 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.074550 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.161438 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.163965 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.176343 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:19.575028 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:19.661552 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:19.661809 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:19.676491 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.075886 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.160831 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.161378 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.177384 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:20.574922 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:20.663506 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:20.663561 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:20.677561 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.074921 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.161954 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.161995 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.176474 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:21.576348 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:21.660914 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:21.661526 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:21.677906 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.073886 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.163050 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.163186 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.177096 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:22.575354 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:22.661395 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:22.662419 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:22.676018 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.075081 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.165541 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.167598 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.177059 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:23.578724 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:23.661590 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:23.663583 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:23.677915 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.075404 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.160554 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.160694 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.177424 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:24.574935 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:24.661573 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:24.661741 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:24.676606 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.075326 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.160646 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.161501 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.177061 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:25.578481 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:25.660834 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:25.660898 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:25.676665 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.075184 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.161150 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.161637 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.176309 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:26.574513 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:26.662848 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:26.663088 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:26.677420 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.075021 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.165418 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.168212 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.177490 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:27.578975 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:27.679301 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:27.679322 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:27.679431 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.074996 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.162680 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.162778 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.176862 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:28.584338 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:28.661447 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:28.661575 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:28.676474 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.075446 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.160533 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.160637 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.176804 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:29.575291 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:29.660666 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:29.661377 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:29.676747 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.075876 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.163324 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.164346 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.176842 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:30.574940 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:30.661784 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:30.663890 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:30.675562 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.075929 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.166766 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.166801 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.177511 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:31.576459 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:31.662025 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:31.663438 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:31.676271 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.073952 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.167131 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.168569 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.176145 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:32.575025 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:32.662498 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:32.662590 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:32.677052 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.075072 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.167106 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.167615 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.178785 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:33.610737 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:33.664647 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:33.665518 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:33.678347 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.233974 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.234835 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.235062 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.235131 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:34.577251 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:34.662801 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:34.662918 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:34.678374 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.074819 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.163517 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:35.164629 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.178234 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:35.576414 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:35.682465 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:35.685868 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:35.686399 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.074440 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.164533 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:36.164834 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.176315 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:36.574640 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:36.661109 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:36.661180 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:36.675787 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.074602 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.162850 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:37.163685 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.175928 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:37.574570 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:37.660699 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:37.660733 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:37.676394 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.074553 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.160993 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.161106 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:38.175660 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:38.573792 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:38.661687 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:38.662344 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:38.675881 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.074461 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.162521 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:39.162522 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.179335 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:39.577466 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:39.661678 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:39.665473 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:39.677752 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.076698 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.161626 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.162530 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:40.178115 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:40.574738 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:40.662585 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:40.664496 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:40.676195 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.075348 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.160064 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.160600 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:41.175858 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:41.573977 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:41.661298 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 11:16:41.661412 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:41.677233 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.074642 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.162370 1350845 kapi.go:107] duration metric: took 34.005210017s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 11:16:42.162998 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:42.176207 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:42.574973 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:42.662772 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:42.677621 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.076321 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:43.166421 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.180348 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:43.580191 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:43.661835 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:43.678890 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.075709 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:44.161206 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.177565 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:44.575449 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:44.660314 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:44.677310 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.075559 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:45.162555 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.177848 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:45.576394 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:45.677122 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:45.677119 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.077345 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:46.163867 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:46.178025 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:46.576449 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:46.662086 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:46.677836 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:47.074518 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:47.160762 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:47.181093 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:47.586394 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:47.663437 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:47.676129 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.077828 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:48.176246 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:48.178040 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:48.573888 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:48.661534 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:48.676907 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.073976 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:49.162150 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:49.176845 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:49.573961 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:49.660950 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:49.675356 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.086154 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:50.186089 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:50.186114 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:50.575854 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:50.661334 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:50.677634 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:51.075909 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:51.178155 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:51.178911 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:51.574954 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:51.661229 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:51.676624 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:52.075849 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:52.165840 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:52.178754 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:52.755061 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:52.755262 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:52.755299 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:53.075329 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:53.177445 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:53.178367 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:53.574856 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:53.661045 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:53.675780 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:54.075052 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:54.162506 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:54.177657 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:54.575537 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:54.660491 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:54.676454 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:55.075631 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:55.161566 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:55.178815 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:55.579530 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:55.663390 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:55.676248 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:56.077402 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:56.164871 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:56.177693 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:56.575714 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:56.669281 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:56.678256 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:57.075612 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:57.162041 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:57.176334 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:57.574849 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:57.663232 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:57.678319 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:58.078281 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:58.162125 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:58.176699 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:58.574529 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:58.660833 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:58.676528 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:59.075836 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:59.162431 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:59.178172 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:16:59.576102 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:16:59.662040 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:16:59.676834 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:00.074342 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:00.177468 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:00.178525 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:00.573931 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:00.663701 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:00.676638 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:01.074800 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:01.174581 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:01.181723 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:01.575299 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:01.675065 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:01.676728 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:02.077280 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:02.168342 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:02.178186 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:02.576385 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:02.662754 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:02.678905 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:03.075474 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:03.161888 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:03.177390 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:03.575756 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:03.661305 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:03.676587 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:04.077570 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:04.177739 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:04.178190 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:04.576839 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:04.661687 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:04.677968 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:05.075051 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:05.163297 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:05.177547 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:05.676519 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:05.676584 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:05.678259 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:06.078746 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:06.178297 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:06.178452 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:06.577539 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:06.664457 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:06.678036 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:07.074614 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:07.175610 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:07.177103 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:07.573576 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:07.662118 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:07.678306 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:08.074760 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:08.161485 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:08.177836 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:08.574307 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:08.676338 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:08.678243 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:09.077126 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:09.168293 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:09.179534 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:09.575570 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:09.672829 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:09.676325 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:10.074736 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:10.161975 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:10.176329 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:10.575793 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:10.664234 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:10.677141 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:11.076543 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:11.163544 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:11.176085 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:11.574610 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:11.661143 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:11.676147 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:12.078429 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:12.170668 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:12.178113 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:12.575626 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:12.661665 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:12.677456 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:13.077541 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:13.178912 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:13.178960 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:13.574944 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:13.665056 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:13.676144 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 11:17:14.078386 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:14.184806 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:14.185706 1350845 kapi.go:107] duration metric: took 1m4.012983057s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 11:17:14.574367 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:14.660347 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:15.222334 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:15.222566 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:15.577902 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:15.665571 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:16.079034 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:16.163031 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:16.576381 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:16.667487 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:17.075100 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:17.162207 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:17.575187 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:17.663282 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:18.253430 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:18.254173 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:18.575684 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:18.661028 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:19.076688 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:19.161392 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:19.574812 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:19.660762 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:20.076261 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:20.177133 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:20.575070 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:20.661318 1350845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 11:17:21.075719 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:21.161434 1350845 kapi.go:107] duration metric: took 1m13.004075369s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 11:17:21.575140 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:22.075088 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:22.576460 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:23.074098 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:23.576784 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:24.143078 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:24.575020 1350845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 11:17:25.075767 1350845 kapi.go:107] duration metric: took 1m13.504882856s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 11:17:25.077489 1350845 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-410268 cluster.
	I1217 11:17:25.078739 1350845 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 11:17:25.080188 1350845 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 11:17:25.082032 1350845 out.go:179] * Enabled addons: registry-creds, inspektor-gadget, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1217 11:17:25.083284 1350845 addons.go:530] duration metric: took 1m24.744121732s for enable addons: enabled=[registry-creds inspektor-gadget cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns default-storageclass amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1217 11:17:25.083343 1350845 start.go:247] waiting for cluster config update ...
	I1217 11:17:25.083377 1350845 start.go:256] writing updated cluster config ...
	I1217 11:17:25.083669 1350845 ssh_runner.go:195] Run: rm -f paused
	I1217 11:17:25.089274 1350845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:17:25.093134 1350845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f9dfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.098552 1350845 pod_ready.go:94] pod "coredns-66bc5c9577-f9dfv" is "Ready"
	I1217 11:17:25.098576 1350845 pod_ready.go:86] duration metric: took 5.421914ms for pod "coredns-66bc5c9577-f9dfv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.101079 1350845 pod_ready.go:83] waiting for pod "etcd-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.106432 1350845 pod_ready.go:94] pod "etcd-addons-410268" is "Ready"
	I1217 11:17:25.106454 1350845 pod_ready.go:86] duration metric: took 5.356623ms for pod "etcd-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.108771 1350845 pod_ready.go:83] waiting for pod "kube-apiserver-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.113893 1350845 pod_ready.go:94] pod "kube-apiserver-addons-410268" is "Ready"
	I1217 11:17:25.113911 1350845 pod_ready.go:86] duration metric: took 5.117842ms for pod "kube-apiserver-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.116174 1350845 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.493720 1350845 pod_ready.go:94] pod "kube-controller-manager-addons-410268" is "Ready"
	I1217 11:17:25.493753 1350845 pod_ready.go:86] duration metric: took 377.552241ms for pod "kube-controller-manager-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:25.694964 1350845 pod_ready.go:83] waiting for pod "kube-proxy-6pdv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:26.093391 1350845 pod_ready.go:94] pod "kube-proxy-6pdv6" is "Ready"
	I1217 11:17:26.093422 1350845 pod_ready.go:86] duration metric: took 398.410611ms for pod "kube-proxy-6pdv6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:26.293825 1350845 pod_ready.go:83] waiting for pod "kube-scheduler-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:26.693756 1350845 pod_ready.go:94] pod "kube-scheduler-addons-410268" is "Ready"
	I1217 11:17:26.693783 1350845 pod_ready.go:86] duration metric: took 399.902092ms for pod "kube-scheduler-addons-410268" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 11:17:26.693797 1350845 pod_ready.go:40] duration metric: took 1.604488519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 11:17:26.741152 1350845 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 11:17:26.742904 1350845 out.go:179] * Done! kubectl is now configured to use "addons-410268" cluster and "default" namespace by default
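The gcp-auth output above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch of what that looks like in a pod manifest: only the label key comes from the log; the "true" value, the pod name, and the container details are illustrative assumptions, not something this report confirms.

	# Hypothetical pod spec showing the opt-out label mentioned in the gcp-auth
	# messages above. Only the label key gcp-auth-skip-secret appears in the log;
	# the "true" value and everything else here is assumed for illustration.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true" # assumed value; key taken from the log above
	spec:
	  containers:
	  - name: app
	    image: public.ecr.aws/nginx/nginx   # repository seen elsewhere in this run; tag omitted

With such a label in place, the gcp-auth webhook should leave the pod untouched, while all other pods in the addons-410268 cluster still get the mounted credentials described above.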
	
	
	==> CRI-O <==
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.882822712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.882907843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.883613635Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b298076f-cc1d-4195-b96d-0d3f9984a187 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.918953278Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f657c6d0-0d8e-4c07-814c-9b631f5a81fc name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.919040920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f657c6d0-0d8e-4c07-814c-9b631f5a81fc name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.920779869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95387715-0800-4306-9492-9ed0e9784d36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.922143504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421922117204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95387715-0800-4306-9492-9ed0e9784d36 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923111372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923210252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.923513517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dec9b7b0-8957-4da5-9e53-980522d83e56 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.957892862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2cc50bab-88a1-4189-bb7b-2381ad6991c9 name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.957965275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2cc50bab-88a1-4189-bb7b-2381ad6991c9 name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.959293776Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4266b68c-eac9-4188-925d-b0d8d9cafa7e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.960680560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421960651765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4266b68c-eac9-4188-925d-b0d8d9cafa7e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961588702Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961663770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.961940620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e97439e2-14e8-4422-bd79-6064a4097188 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.993267981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfb848fc-e92c-44c6-b9d0-154674da823d name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.993368064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfb848fc-e92c-44c6-b9d0-154674da823d name=/runtime.v1.RuntimeService/Version
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.994701129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70aaf2f8-d5b6-44b5-802d-352e2a12c445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.995871740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765970421995848634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:551113,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70aaf2f8-d5b6-44b5-802d-352e2a12c445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.996739659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.996806392Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:21 addons-410268 crio[813]: time="2025-12-17 11:20:21.997083935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3388f744eafd79eead0b5cf45f82b2bb84d2d06d6d7e4a006bb805f6ece193af,PodSandboxId:e6bd9f288ebf4cc56da58c245cc18922df0dd7178151de1119656ef662963808,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765970279763976133,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d8c813f3-2dd2-444d-88d8-fe297f907413,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a3d3f9b57387f33a36c34182b887ebe1682722b4962431559e27be67059c84,PodSandboxId:89babac933f5aa295026f142bf82dbc55a8133f3f75c64ecfc188492117a4d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765970251053857675,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 89b289cf-cd57-4583-9745-2ff3ad4a62ac,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b3bd861b00947ea747cba080e36feb58d620667bd535a3e091dbcc8119f2f8d,PodSandboxId:a5ad2d4e669ae7633426a272a372a1ba1aeb99b745f797d026ce3ca3157ed186,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765970239840773449,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-wnptk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efa47144-c3a7-4842-b47a-dccdfad29fa0,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ce9723c594e95d3af7ff756ea18c604a5a5c85238726ed7f208eb8ca1fe9521a,PodSandboxId:ab1d3f4303d9ff06239d666f537ce2ad700c6da2c32df914e264ea4c0b557ce3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212994447146,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xcp88,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0ea1bb28-6eb2-4e5f-a0ab-ae4ac81e953d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fe1320aa0577943f2ce261f25568a27312decbe658920f895186345ff229969,PodSandboxId:d156e490be750c3b9a5e337893c5b3a7e7bc1615b4f159398ba8ebdb1524e7b0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765970212875683522,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nfwbf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d3c5178c-0e1c-404b-a454-cd0502cb0ba6,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e71f626582a418e0ed5c8719bfd643ff314b7611415afe559dcc3f7323bb80b,PodSandboxId:90ece57d709f4ccc566a2741d429cf0bae90a9c669f1b48a5cb1fb087ae69778,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotation
s:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765970196708651331,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6073097f-5ea5-4564-9be4-35f9191742dc,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c6935f264be567197d91ea3539a21b4ad960e18fce8f0cd2fd8a064aee0962b,PodSandboxId:0ecb8b919ad23721d145afcc276bf240c33e08867b8b11afbf7cc21919836c35,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&I
mageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765970177766563755,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-7vz7s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5f6f486-f31c-465a-bbac-0cabfeabfa57,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44,PodSandboxId:65b713e60d6e6ace420aa097c21c09e236417669978aa500826c9f51e1129455,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765970167824007229,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c32b2c11-7a98-48ba-89d5-3a5e581c171b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8,PodSandboxId:33825f2fd286da9301fc3ba0fcc90cbd1238b56b148a5f9e3256e0dbe31b2547,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765970161735737258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-f9dfv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c65235-f139-4f33-adef-fc6ef1ccb253,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0,PodSandboxId:40ea1b3a1351db4bb464ffe1eb4ebcab02d218ef66454fb944ac5e8fc0d98ae6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765970160807960000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6pdv6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d1e053-5420-4db6-a1f6-daab3034e85c,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801,PodSandboxId:1ab816366f67f435cc5cc75420205135cbead4b6941122b61878c2debaca3b89,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765970149189982033,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e8ce73c3cc79b63688be36508c3f66,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab,PodSandboxId:daa48fe629c484b123b35d228ebb38cf0bae01d253e2de4d8580ac6bc280920b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,CreatedAt:1765970149213819944,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cfbacbbf633ed0be3d9c6bc9784a200,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-
port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524,PodSandboxId:f245ba360082c2736323673a75934b41e50a424d197fbda93c00c99e5ae0e67e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765970149170240476,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08453d2f2433d0eaf792f
305f65cd5f7,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452,PodSandboxId:66b4583fa8412d7d376eb513c5676ec785a5817b40b8d53871ef9d12bbe6a8c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765970149143363619,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-api
server-addons-410268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 746f2826e5ab144162efd3359f041e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9be81646-2396-4828-8f43-7788c88199be name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 11:20:22 addons-410268 crio[813]: time="2025-12-17 11:20:22.018608345Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3388f744eafd7       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   e6bd9f288ebf4       nginx                                       default
	e5a3d3f9b5738       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   89babac933f5a       busybox                                     default
	2b3bd861b0094       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   a5ad2d4e669ae       ingress-nginx-controller-85d4c799dd-wnptk   ingress-nginx
	ce9723c594e95       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   ab1d3f4303d9f       ingress-nginx-admission-patch-xcp88         ingress-nginx
	1fe1320aa0577       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   d156e490be750       ingress-nginx-admission-create-nfwbf        ingress-nginx
	9e71f626582a4       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   90ece57d709f4       kube-ingress-dns-minikube                   kube-system
	7c6935f264be5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   0ecb8b919ad23       amd-gpu-device-plugin-7vz7s                 kube-system
	2bae8cb5b2578       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   65b713e60d6e6       storage-provisioner                         kube-system
	02277316fac52       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   33825f2fd286d       coredns-66bc5c9577-f9dfv                    kube-system
	cb396bc0e2166       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                             4 minutes ago       Running             kube-proxy                0                   40ea1b3a1351d       kube-proxy-6pdv6                            kube-system
	de5475dd01a03       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                             4 minutes ago       Running             kube-scheduler            0                   daa48fe629c48       kube-scheduler-addons-410268                kube-system
	9c48b14c60d6a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   1ab816366f67f       etcd-addons-410268                          kube-system
	2c14e25e96ab6       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                             4 minutes ago       Running             kube-controller-manager   0                   f245ba360082c       kube-controller-manager-addons-410268       kube-system
	e56b63b5b38c5       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                             4 minutes ago       Running             kube-apiserver            0                   66b4583fa8412       kube-apiserver-addons-410268                kube-system
	
	
	==> coredns [02277316fac522833acac691e0e0cd2fe5d863294b5a9a6c9d4ce03fbcfd48f8] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.23:42695 - 32588 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000452699s
	[INFO] 10.244.0.23:36435 - 64179 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013891s
	[INFO] 10.244.0.23:56108 - 42822 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219564s
	[INFO] 10.244.0.23:57650 - 38552 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084398s
	[INFO] 10.244.0.23:49609 - 4240 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016395s
	[INFO] 10.244.0.23:35563 - 18123 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000334952s
	[INFO] 10.244.0.23:60193 - 19299 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001622887s
	[INFO] 10.244.0.23:48643 - 25052 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003869549s
	[INFO] 10.244.0.27:49832 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238957s
	[INFO] 10.244.0.27:37521 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157504s
	
	
	==> describe nodes <==
	Name:               addons-410268
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-410268
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=addons-410268
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T11_15_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-410268
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 11:15:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-410268
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 11:20:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 11:18:27 +0000   Wed, 17 Dec 2025 11:15:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 11:18:27 +0000   Wed, 17 Dec 2025 11:15:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 11:18:27 +0000   Wed, 17 Dec 2025 11:15:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 11:18:27 +0000   Wed, 17 Dec 2025 11:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    addons-410268
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 7773aa7269d04e148c7e331a57e11558
	  System UUID:                7773aa72-69d0-4e14-8c7e-331a57e11558
	  Boot ID:                    3b845b4b-5fae-44f0-b3f6-c52161226314
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-5d498dc89-btq58              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-wnptk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m15s
	  kube-system                 amd-gpu-device-plugin-7vz7s                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 coredns-66bc5c9577-f9dfv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-410268                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m28s
	  kube-system                 kube-apiserver-addons-410268                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-controller-manager-addons-410268        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-6pdv6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-410268                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m34s (x8 over 4m34s)  kubelet          Node addons-410268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s (x8 over 4m34s)  kubelet          Node addons-410268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s (x7 over 4m34s)  kubelet          Node addons-410268 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m28s                  kubelet          Node addons-410268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s                  kubelet          Node addons-410268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s                  kubelet          Node addons-410268 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m27s                  kubelet          Node addons-410268 status is now: NodeReady
	  Normal  RegisteredNode           4m23s                  node-controller  Node addons-410268 event: Registered Node addons-410268 in Controller
	
	
	==> dmesg <==
	[  +0.048223] kauditd_printk_skb: 405 callbacks suppressed
	[  +2.961015] kauditd_printk_skb: 293 callbacks suppressed
	[  +6.055211] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.879282] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.872741] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.153815] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.010193] kauditd_printk_skb: 73 callbacks suppressed
	[Dec17 11:17] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.470482] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.000054] kauditd_printk_skb: 96 callbacks suppressed
	[  +1.750391] kauditd_printk_skb: 65 callbacks suppressed
	[  +7.105559] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.424473] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.571002] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.861369] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.636061] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.671797] kauditd_printk_skb: 141 callbacks suppressed
	[Dec17 11:18] kauditd_printk_skb: 77 callbacks suppressed
	[  +1.608255] kauditd_printk_skb: 167 callbacks suppressed
	[  +2.906072] kauditd_printk_skb: 78 callbacks suppressed
	[  +2.311565] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.854772] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.449303] kauditd_printk_skb: 127 callbacks suppressed
	[Dec17 11:20] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [9c48b14c60d6a45580132868fce1558152914faf87b5d0e6df6f66364e511801] <==
	{"level":"warn","ts":"2025-12-17T11:16:52.742399Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.16272ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:16:52.742417Z","caller":"traceutil/trace.go:172","msg":"trace[1694931419] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1021; }","duration":"162.182119ms","start":"2025-12-17T11:16:52.580230Z","end":"2025-12-17T11:16:52.742412Z","steps":["trace[1694931419] 'agreement among raft nodes before linearized reading'  (duration: 162.156632ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:16:52.743348Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T11:16:52.366117Z","time spent":"375.795334ms","remote":"127.0.0.1:56468","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":9227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-2nlrj\" mod_revision:1001 > success:<request_put:<key:\"/registry/pods/gadget/gadget-2nlrj\" value_size:9185 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-2nlrj\" > >"}
	{"level":"info","ts":"2025-12-17T11:17:01.376521Z","caller":"traceutil/trace.go:172","msg":"trace[1767755788] transaction","detail":"{read_only:false; response_revision:1077; number_of_response:1; }","duration":"133.166443ms","start":"2025-12-17T11:17:01.243340Z","end":"2025-12-17T11:17:01.376506Z","steps":["trace[1767755788] 'process raft request'  (duration: 133.062306ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:17:05.669342Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.507343ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:17:05.669399Z","caller":"traceutil/trace.go:172","msg":"trace[188952421] range","detail":"{range_begin:/registry/validatingadmissionpolicies; range_end:; response_count:0; response_revision:1093; }","duration":"276.574137ms","start":"2025-12-17T11:17:05.392812Z","end":"2025-12-17T11:17:05.669386Z","steps":["trace[188952421] 'range keys from in-memory index tree'  (duration: 275.848761ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:15.210459Z","caller":"traceutil/trace.go:172","msg":"trace[716277579] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"203.790448ms","start":"2025-12-17T11:17:15.006652Z","end":"2025-12-17T11:17:15.210443Z","steps":["trace[716277579] 'read index received'  (duration: 203.785757ms)","trace[716277579] 'applied index is now lower than readState.Index'  (duration: 3.91µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:17:15.210576Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.909151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:17:15.210596Z","caller":"traceutil/trace.go:172","msg":"trace[1277096382] range","detail":"{range_begin:/registry/podtemplates; range_end:; response_count:0; response_revision:1161; }","duration":"203.941291ms","start":"2025-12-17T11:17:15.006648Z","end":"2025-12-17T11:17:15.210589Z","steps":["trace[1277096382] 'agreement among raft nodes before linearized reading'  (duration: 203.879804ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:17:15.210930Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.62392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:17:15.210954Z","caller":"traceutil/trace.go:172","msg":"trace[1347680689] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1162; }","duration":"140.653677ms","start":"2025-12-17T11:17:15.070294Z","end":"2025-12-17T11:17:15.210947Z","steps":["trace[1347680689] 'agreement among raft nodes before linearized reading'  (duration: 140.607399ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:15.211212Z","caller":"traceutil/trace.go:172","msg":"trace[902206238] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"206.399719ms","start":"2025-12-17T11:17:15.004801Z","end":"2025-12-17T11:17:15.211201Z","steps":["trace[902206238] 'process raft request'  (duration: 205.985998ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:18.245932Z","caller":"traceutil/trace.go:172","msg":"trace[1930351079] linearizableReadLoop","detail":"{readStateIndex:1197; appliedIndex:1197; }","duration":"176.245604ms","start":"2025-12-17T11:17:18.069671Z","end":"2025-12-17T11:17:18.245916Z","steps":["trace[1930351079] 'read index received'  (duration: 176.241554ms)","trace[1930351079] 'applied index is now lower than readState.Index'  (duration: 3.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T11:17:18.246048Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.362128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:17:18.246065Z","caller":"traceutil/trace.go:172","msg":"trace[528151282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1169; }","duration":"176.393264ms","start":"2025-12-17T11:17:18.069667Z","end":"2025-12-17T11:17:18.246061Z","steps":["trace[528151282] 'agreement among raft nodes before linearized reading'  (duration: 176.330722ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:18.246779Z","caller":"traceutil/trace.go:172","msg":"trace[421943312] transaction","detail":"{read_only:false; response_revision:1170; number_of_response:1; }","duration":"201.392049ms","start":"2025-12-17T11:17:18.045347Z","end":"2025-12-17T11:17:18.246739Z","steps":["trace[421943312] 'process raft request'  (duration: 200.867739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:52.594334Z","caller":"traceutil/trace.go:172","msg":"trace[1020797586] linearizableReadLoop","detail":"{readStateIndex:1396; appliedIndex:1396; }","duration":"122.428543ms","start":"2025-12-17T11:17:52.471884Z","end":"2025-12-17T11:17:52.594312Z","steps":["trace[1020797586] 'read index received'  (duration: 122.420607ms)","trace[1020797586] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:17:52.594454Z","caller":"traceutil/trace.go:172","msg":"trace[1302503521] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"149.808704ms","start":"2025-12-17T11:17:52.444635Z","end":"2025-12-17T11:17:52.594443Z","steps":["trace[1302503521] 'process raft request'  (duration: 149.70749ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:17:52.594493Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.591887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T11:17:52.594517Z","caller":"traceutil/trace.go:172","msg":"trace[504402566] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1361; }","duration":"122.630694ms","start":"2025-12-17T11:17:52.471880Z","end":"2025-12-17T11:17:52.594511Z","steps":["trace[504402566] 'agreement among raft nodes before linearized reading'  (duration: 122.560785ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:53.822275Z","caller":"traceutil/trace.go:172","msg":"trace[1783343896] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1386; }","duration":"231.03912ms","start":"2025-12-17T11:17:53.591221Z","end":"2025-12-17T11:17:53.822260Z","steps":["trace[1783343896] 'process raft request'  (duration: 230.860709ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T11:17:58.897335Z","caller":"traceutil/trace.go:172","msg":"trace[1654990130] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"191.648246ms","start":"2025-12-17T11:17:58.705668Z","end":"2025-12-17T11:17:58.897316Z","steps":["trace[1654990130] 'read index received'  (duration: 191.639817ms)","trace[1654990130] 'applied index is now lower than readState.Index'  (duration: 7.277µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T11:17:58.897711Z","caller":"traceutil/trace.go:172","msg":"trace[1494870268] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"274.432867ms","start":"2025-12-17T11:17:58.623267Z","end":"2025-12-17T11:17:58.897700Z","steps":["trace[1494870268] 'process raft request'  (duration: 274.324035ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T11:17:58.897700Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.024025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2025-12-17T11:17:58.897765Z","caller":"traceutil/trace.go:172","msg":"trace[1986685695] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:1433; }","duration":"192.105457ms","start":"2025-12-17T11:17:58.705648Z","end":"2025-12-17T11:17:58.897754Z","steps":["trace[1986685695] 'agreement among raft nodes before linearized reading'  (duration: 191.747861ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:20:22 up 4 min,  0 users,  load average: 0.50, 1.04, 0.53
	Linux addons-410268 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e56b63b5b38c52a74a999a91f90bca01f6ee7238bada280b7886f9a5ab521452] <==
	E1217 11:17:00.154665       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
	E1217 11:17:00.175589       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
	E1217 11:17:00.217107       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
	E1217 11:17:00.298835       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.205.218:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.205.218:443: connect: connection refused" logger="UnhandledError"
	I1217 11:17:00.528021       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 11:17:38.500147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:50194: use of closed network connection
	E1217 11:17:38.687963       1 conn.go:339] Error on socket receive: read tcp 192.168.39.28:8443->192.168.39.1:50232: use of closed network connection
	I1217 11:17:47.757622       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.136.141"}
	I1217 11:17:54.180834       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 11:17:54.389276       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.53.216"}
	I1217 11:18:01.192462       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1217 11:18:16.503066       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1217 11:18:35.304069       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1217 11:18:44.882023       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 11:18:44.885421       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 11:18:44.919637       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 11:18:44.919686       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 11:18:44.953754       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 11:18:44.953808       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 11:18:45.007788       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 11:18:45.007838       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1217 11:18:45.919783       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1217 11:18:46.008455       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1217 11:18:46.022256       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1217 11:20:20.969982       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.106.1"}
	
	
	==> kube-controller-manager [2c14e25e96ab62a91146b02184edba5d238f99effa3b33a0a7fdeac0d6813524] <==
	E1217 11:18:55.047873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1217 11:18:59.208223       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 11:18:59.208270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 11:18:59.286980       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 11:18:59.287039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 11:18:59.790669       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:18:59.791736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:01.395546       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:01.396707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:02.677324       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:02.678341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:13.072480       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:13.073537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:14.990560       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:14.991525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:20.410474       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:20.411409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:44.019718       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:44.021136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:46.025818       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:46.026859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:19:47.235594       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:19:47.236568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 11:20:16.629032       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 11:20:16.629949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cb396bc0e216641073e6dc503e2a99bad41acfc829a1f131a5d1f0fc16e232e0] <==
	I1217 11:16:01.320650       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 11:16:01.423767       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 11:16:01.423896       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.28"]
	E1217 11:16:01.424229       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 11:16:01.651698       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 11:16:01.651749       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 11:16:01.651777       1 server_linux.go:132] "Using iptables Proxier"
	I1217 11:16:01.687578       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 11:16:01.688388       1 server.go:527] "Version info" version="v1.34.3"
	I1217 11:16:01.688402       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 11:16:01.693470       1 config.go:200] "Starting service config controller"
	I1217 11:16:01.693482       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 11:16:01.698899       1 config.go:106] "Starting endpoint slice config controller"
	I1217 11:16:01.698918       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 11:16:01.699635       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 11:16:01.699644       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 11:16:01.700775       1 config.go:309] "Starting node config controller"
	I1217 11:16:01.700786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 11:16:01.700792       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 11:16:01.794076       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 11:16:01.801337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 11:16:01.801462       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de5475dd01a03ce3358f3c656b415b83b074580e46c6ad1a130279cec74872ab] <==
	E1217 11:15:52.141286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:52.140815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:52.141108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:52.141473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 11:15:52.141537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:15:52.140508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:15:52.943654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 11:15:52.957604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 11:15:53.007599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 11:15:53.024263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 11:15:53.066280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 11:15:53.130106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 11:15:53.156953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 11:15:53.157824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 11:15:53.176905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 11:15:53.213603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 11:15:53.216208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 11:15:53.230760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 11:15:53.231015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 11:15:53.275220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 11:15:53.338533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 11:15:53.492656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 11:15:53.585005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 11:15:53.641273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1217 11:15:56.215290       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 11:18:54 addons-410268 kubelet[1508]: E1217 11:18:54.973365    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970334972725186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:18:54 addons-410268 kubelet[1508]: E1217 11:18:54.973416    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970334972725186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:18:55 addons-410268 kubelet[1508]: I1217 11:18:55.929150    1508 scope.go:117] "RemoveContainer" containerID="59110e977b9687b2a3d792445201c8d82a14bb0a58e61b29a5fe8c7ff8eebccc"
	Dec 17 11:18:56 addons-410268 kubelet[1508]: I1217 11:18:56.043881    1508 scope.go:117] "RemoveContainer" containerID="4fb9f07ee8c9761b02b16fc9f8e32457819829a5b1c4d3f927a859516dff11a6"
	Dec 17 11:18:56 addons-410268 kubelet[1508]: I1217 11:18:56.160529    1508 scope.go:117] "RemoveContainer" containerID="b0a987b71e3f8b728e69412e0813b14b1aed68373816135e5ce5e62cd003576d"
	Dec 17 11:18:59 addons-410268 kubelet[1508]: I1217 11:18:59.857597    1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:19:04 addons-410268 kubelet[1508]: E1217 11:19:04.976896    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970344976136753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:04 addons-410268 kubelet[1508]: E1217 11:19:04.977021    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970344976136753  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:09 addons-410268 kubelet[1508]: I1217 11:19:09.857136    1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7vz7s" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:19:14 addons-410268 kubelet[1508]: E1217 11:19:14.980370    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970354979499983  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:14 addons-410268 kubelet[1508]: E1217 11:19:14.980408    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970354979499983  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:24 addons-410268 kubelet[1508]: E1217 11:19:24.983057    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970364982711518  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:24 addons-410268 kubelet[1508]: E1217 11:19:24.983356    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970364982711518  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:34 addons-410268 kubelet[1508]: E1217 11:19:34.989462    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970374988840559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:34 addons-410268 kubelet[1508]: E1217 11:19:34.989512    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970374988840559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:44 addons-410268 kubelet[1508]: E1217 11:19:44.991921    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970384991330257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:44 addons-410268 kubelet[1508]: E1217 11:19:44.991961    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970384991330257  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:54 addons-410268 kubelet[1508]: E1217 11:19:54.995311    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970394994869269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:19:54 addons-410268 kubelet[1508]: E1217 11:19:54.995349    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970394994869269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:20:04 addons-410268 kubelet[1508]: E1217 11:20:04.998061    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970404997700538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:20:04 addons-410268 kubelet[1508]: E1217 11:20:04.998083    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970404997700538  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:20:15 addons-410268 kubelet[1508]: E1217 11:20:15.000438    1508 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765970414999795069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:20:15 addons-410268 kubelet[1508]: E1217 11:20:15.000460    1508 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765970414999795069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:551113}  inodes_used:{value:196}}"
	Dec 17 11:20:18 addons-410268 kubelet[1508]: I1217 11:20:18.857508    1508 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-7vz7s" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 11:20:21 addons-410268 kubelet[1508]: I1217 11:20:21.034399    1508 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljvn7\" (UniqueName: \"kubernetes.io/projected/d97fa524-2409-4372-8cad-5cf2b4b55c48-kube-api-access-ljvn7\") pod \"hello-world-app-5d498dc89-btq58\" (UID: \"d97fa524-2409-4372-8cad-5cf2b4b55c48\") " pod="default/hello-world-app-5d498dc89-btq58"
	
	
	==> storage-provisioner [2bae8cb5b25781f10a1935e53f8d3800277b2d6f7cebc7ade9b8ef9ed6582c44] <==
	W1217 11:19:57.511194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:59.514892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:19:59.520892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:01.523934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:01.528870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:03.532001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:03.540481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:05.544034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:05.548614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:07.551945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:07.558795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:09.561722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:09.566790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:11.570534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:11.577603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:13.582814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:13.589249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:15.591829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:15.598700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:17.602415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:17.608139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:19.611763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:19.616723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:21.620739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 11:20:21.627680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-410268 -n addons-410268
helpers_test.go:270: (dbg) Run:  kubectl --context addons-410268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88: exit status 1 (73.213757ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-btq58
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-410268/192.168.39.28
	Start Time:       Wed, 17 Dec 2025 11:20:20 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ljvn7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ljvn7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-btq58 to addons-410268
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nfwbf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xcp88" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-410268 describe pod hello-world-app-5d498dc89-btq58 ingress-nginx-admission-create-nfwbf ingress-nginx-admission-patch-xcp88: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable ingress-dns --alsologtostderr -v=1: (1.051663954s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable ingress --alsologtostderr -v=1: (7.732583738s)
--- FAIL: TestAddons/parallel/Ingress (157.90s)

                                                
                                    
TestPreload (149.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-675733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-675733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m32.947940747s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675733 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-675733 image pull gcr.io/k8s-minikube/busybox: (3.66857211s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-675733
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-675733: (6.753130407s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-675733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1217 12:09:32.190317 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:09:52.980350 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-675733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.132357461s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675733 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.3
	registry.k8s.io/kube-proxy:v1.34.3
	registry.k8s.io/kube-controller-manager:v1.34.3
	registry.k8s.io/kube-apiserver:v1.34.3
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-17 12:10:14.555671932 +0000 UTC m=+3344.255849935
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-675733 -n test-preload-675733
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675733 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-245791 ssh -n multinode-245791-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ ssh     │ multinode-245791 ssh -n multinode-245791 sudo cat /home/docker/cp-test_multinode-245791-m03_multinode-245791.txt                                          │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ cp      │ multinode-245791 cp multinode-245791-m03:/home/docker/cp-test.txt multinode-245791-m02:/home/docker/cp-test_multinode-245791-m03_multinode-245791-m02.txt │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ ssh     │ multinode-245791 ssh -n multinode-245791-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ ssh     │ multinode-245791 ssh -n multinode-245791-m02 sudo cat /home/docker/cp-test_multinode-245791-m03_multinode-245791-m02.txt                                  │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ node    │ multinode-245791 node stop m03                                                                                                                            │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ node    │ multinode-245791 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 11:57 UTC │
	│ node    │ list -p multinode-245791                                                                                                                                  │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │                     │
	│ stop    │ -p multinode-245791                                                                                                                                       │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 11:57 UTC │ 17 Dec 25 12:00 UTC │
	│ start   │ -p multinode-245791 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:00 UTC │ 17 Dec 25 12:02 UTC │
	│ node    │ list -p multinode-245791                                                                                                                                  │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:02 UTC │                     │
	│ node    │ multinode-245791 node delete m03                                                                                                                          │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:02 UTC │ 17 Dec 25 12:02 UTC │
	│ stop    │ multinode-245791 stop                                                                                                                                     │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:02 UTC │ 17 Dec 25 12:05 UTC │
	│ start   │ -p multinode-245791 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:05 UTC │ 17 Dec 25 12:07 UTC │
	│ node    │ list -p multinode-245791                                                                                                                                  │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │                     │
	│ start   │ -p multinode-245791-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-245791-m02 │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │                     │
	│ start   │ -p multinode-245791-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-245791-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:07 UTC │
	│ node    │ add -p multinode-245791                                                                                                                                   │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │                     │
	│ delete  │ -p multinode-245791-m03                                                                                                                                   │ multinode-245791-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:07 UTC │
	│ delete  │ -p multinode-245791                                                                                                                                       │ multinode-245791     │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:07 UTC │
	│ start   │ -p test-preload-675733 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-675733  │ jenkins │ v1.37.0 │ 17 Dec 25 12:07 UTC │ 17 Dec 25 12:09 UTC │
	│ image   │ test-preload-675733 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-675733  │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ stop    │ -p test-preload-675733                                                                                                                                    │ test-preload-675733  │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:09 UTC │
	│ start   │ -p test-preload-675733 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-675733  │ jenkins │ v1.37.0 │ 17 Dec 25 12:09 UTC │ 17 Dec 25 12:10 UTC │
	│ image   │ test-preload-675733 image list                                                                                                                            │ test-preload-675733  │ jenkins │ v1.37.0 │ 17 Dec 25 12:10 UTC │ 17 Dec 25 12:10 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:09:31
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:09:31.291593 1375811 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:09:31.291837 1375811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:09:31.291845 1375811 out.go:374] Setting ErrFile to fd 2...
	I1217 12:09:31.291850 1375811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:09:31.292069 1375811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:09:31.292535 1375811 out.go:368] Setting JSON to false
	I1217 12:09:31.293511 1375811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21110,"bootTime":1765952261,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 12:09:31.293574 1375811 start.go:143] virtualization: kvm guest
	I1217 12:09:31.296667 1375811 out.go:179] * [test-preload-675733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 12:09:31.298076 1375811 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 12:09:31.298088 1375811 notify.go:221] Checking for updates...
	I1217 12:09:31.300512 1375811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:09:31.301977 1375811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:09:31.303258 1375811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:09:31.304771 1375811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 12:09:31.305993 1375811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:09:31.307502 1375811 config.go:182] Loaded profile config "test-preload-675733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:09:31.308051 1375811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:09:31.346206 1375811 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 12:09:31.347520 1375811 start.go:309] selected driver: kvm2
	I1217 12:09:31.347538 1375811 start.go:927] validating driver "kvm2" against &{Name:test-preload-675733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-675733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:09:31.347644 1375811 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:09:31.348590 1375811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:09:31.348630 1375811 cni.go:84] Creating CNI manager for ""
	I1217 12:09:31.348702 1375811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:09:31.348748 1375811 start.go:353] cluster config:
	{Name:test-preload-675733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-675733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:09:31.348834 1375811 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:09:31.350314 1375811 out.go:179] * Starting "test-preload-675733" primary control-plane node in "test-preload-675733" cluster
	I1217 12:09:31.351344 1375811 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:09:31.351385 1375811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 12:09:31.351396 1375811 cache.go:65] Caching tarball of preloaded images
	I1217 12:09:31.351492 1375811 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 12:09:31.351506 1375811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 12:09:31.351612 1375811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/config.json ...
	I1217 12:09:31.351830 1375811 start.go:360] acquireMachinesLock for test-preload-675733: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 12:09:31.351919 1375811 start.go:364] duration metric: took 65.049µs to acquireMachinesLock for "test-preload-675733"
	I1217 12:09:31.351941 1375811 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:09:31.351949 1375811 fix.go:54] fixHost starting: 
	I1217 12:09:31.353632 1375811 fix.go:112] recreateIfNeeded on test-preload-675733: state=Stopped err=<nil>
	W1217 12:09:31.353654 1375811 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 12:09:31.355208 1375811 out.go:252] * Restarting existing kvm2 VM for "test-preload-675733" ...
	I1217 12:09:31.355271 1375811 main.go:143] libmachine: starting domain...
	I1217 12:09:31.355288 1375811 main.go:143] libmachine: ensuring networks are active...
	I1217 12:09:31.356055 1375811 main.go:143] libmachine: Ensuring network default is active
	I1217 12:09:31.356523 1375811 main.go:143] libmachine: Ensuring network mk-test-preload-675733 is active
	I1217 12:09:31.356959 1375811 main.go:143] libmachine: getting domain XML...
	I1217 12:09:31.358171 1375811 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-675733</name>
	  <uuid>de5ca0c6-e730-490f-a376-0033ff97a8f5</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/test-preload-675733.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a5:b5:0f'/>
	      <source network='mk-test-preload-675733'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:8f:e6:d0'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
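	
	For reference, the libvirt domain defined by the XML above can also be inspected and driven by hand with virsh; a minimal sketch, assuming virsh is installed on the host and may talk to qemu:///system:
	
		virsh --connect qemu:///system dumpxml test-preload-675733     # should match the XML block above
		virsh --connect qemu:///system start test-preload-675733
		virsh --connect qemu:///system domifaddr test-preload-675733   # address as reported by the DHCP leases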
	
	I1217 12:09:32.651112 1375811 main.go:143] libmachine: waiting for domain to start...
	I1217 12:09:32.652577 1375811 main.go:143] libmachine: domain is now running
	I1217 12:09:32.652602 1375811 main.go:143] libmachine: waiting for IP...
	I1217 12:09:32.653441 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:32.654069 1375811 main.go:143] libmachine: domain test-preload-675733 has current primary IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:32.654083 1375811 main.go:143] libmachine: found domain IP: 192.168.39.23
	I1217 12:09:32.654089 1375811 main.go:143] libmachine: reserving static IP address...
	I1217 12:09:32.654512 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-675733", mac: "52:54:00:a5:b5:0f", ip: "192.168.39.23"} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:08:02 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:32.654535 1375811 main.go:143] libmachine: skip adding static IP to network mk-test-preload-675733 - found existing host DHCP lease matching {name: "test-preload-675733", mac: "52:54:00:a5:b5:0f", ip: "192.168.39.23"}
	I1217 12:09:32.654549 1375811 main.go:143] libmachine: reserved static IP address 192.168.39.23 for domain test-preload-675733
	I1217 12:09:32.654559 1375811 main.go:143] libmachine: waiting for SSH...
	I1217 12:09:32.654567 1375811 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 12:09:32.656954 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:32.657398 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:08:02 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:32.657437 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:32.657603 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:32.657815 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:32.657824 1375811 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 12:09:35.721266 1375811 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.23:22: connect: no route to host
	I1217 12:09:41.801352 1375811 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.23:22: connect: no route to host
	I1217 12:09:44.905816 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:09:44.909492 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:44.909940 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:44.909963 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:44.910169 1375811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/config.json ...
	I1217 12:09:44.910376 1375811 machine.go:94] provisionDockerMachine start ...
	I1217 12:09:44.912716 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:44.913265 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:44.913299 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:44.913478 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:44.913689 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:44.913699 1375811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:09:45.019593 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 12:09:45.019631 1375811 buildroot.go:166] provisioning hostname "test-preload-675733"
	I1217 12:09:45.022950 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.023514 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.023573 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.023761 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:45.024032 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:45.024050 1375811 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-675733 && echo "test-preload-675733" | sudo tee /etc/hostname
	I1217 12:09:45.143313 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-675733
	
	I1217 12:09:45.146648 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.147176 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.147205 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.147391 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:45.147593 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:45.147608 1375811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-675733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-675733/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-675733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:09:45.257591 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:09:45.257636 1375811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:09:45.257704 1375811 buildroot.go:174] setting up certificates
	I1217 12:09:45.257717 1375811 provision.go:84] configureAuth start
	I1217 12:09:45.260836 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.261354 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.261380 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.264040 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.264512 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.264540 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.264703 1375811 provision.go:143] copyHostCerts
	I1217 12:09:45.264787 1375811 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:09:45.264808 1375811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:09:45.264893 1375811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:09:45.265074 1375811 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:09:45.265089 1375811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:09:45.265136 1375811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:09:45.265227 1375811 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:09:45.265238 1375811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:09:45.265275 1375811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:09:45.265443 1375811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.test-preload-675733 san=[127.0.0.1 192.168.39.23 localhost minikube test-preload-675733]
	I1217 12:09:45.413946 1375811 provision.go:177] copyRemoteCerts
	I1217 12:09:45.414027 1375811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:09:45.416713 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.417102 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.417127 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.417286 1375811 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/id_rsa Username:docker}
	I1217 12:09:45.498477 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:09:45.526611 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 12:09:45.555623 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:09:45.584273 1375811 provision.go:87] duration metric: took 326.539967ms to configureAuth
	I1217 12:09:45.584304 1375811 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:09:45.584478 1375811 config.go:182] Loaded profile config "test-preload-675733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:09:45.587162 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.587544 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.587565 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.587725 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:45.587917 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:45.587932 1375811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:09:45.825017 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:09:45.825053 1375811 machine.go:97] duration metric: took 914.663274ms to provisionDockerMachine
	I1217 12:09:45.825073 1375811 start.go:293] postStartSetup for "test-preload-675733" (driver="kvm2")
	I1217 12:09:45.825084 1375811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:09:45.825141 1375811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:09:45.828334 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.828793 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.828836 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.829110 1375811 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/id_rsa Username:docker}
	I1217 12:09:45.910268 1375811 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:09:45.915031 1375811 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:09:45.915068 1375811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:09:45.915142 1375811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:09:45.915224 1375811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:09:45.915319 1375811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:09:45.926173 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:09:45.960782 1375811 start.go:296] duration metric: took 135.692574ms for postStartSetup
	I1217 12:09:45.960825 1375811 fix.go:56] duration metric: took 14.608877034s for fixHost
	I1217 12:09:45.963816 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.964322 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:45.964361 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:45.964556 1375811 main.go:143] libmachine: Using SSH client type: native
	I1217 12:09:45.964835 1375811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I1217 12:09:45.964850 1375811 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:09:46.065500 1375811 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973386.030639905
	
	I1217 12:09:46.065527 1375811 fix.go:216] guest clock: 1765973386.030639905
	I1217 12:09:46.065535 1375811 fix.go:229] Guest: 2025-12-17 12:09:46.030639905 +0000 UTC Remote: 2025-12-17 12:09:45.960829433 +0000 UTC m=+14.719944953 (delta=69.810472ms)
	I1217 12:09:46.065551 1375811 fix.go:200] guest clock delta is within tolerance: 69.810472ms
	I1217 12:09:46.065557 1375811 start.go:83] releasing machines lock for "test-preload-675733", held for 14.713625743s
	I1217 12:09:46.068919 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:46.069318 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:46.069342 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:46.069545 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:09:46.069586 1375811 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:09:46.069594 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:09:46.069616 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:09:46.069639 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:09:46.069663 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:09:46.069708 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:09:46.069794 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:09:46.072014 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:46.072351 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:46.072374 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:46.072497 1375811 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/id_rsa Username:docker}
	I1217 12:09:46.172101 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:09:46.201915 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:09:46.230373 1375811 ssh_runner.go:195] Run: openssl version
	I1217 12:09:46.236592 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:46.248092 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:09:46.260207 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:46.265498 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:46.265574 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:46.272696 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:09:46.283871 1375811 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 12:09:46.295713 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:09:46.307229 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:09:46.320918 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:09:46.326520 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:09:46.326599 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:09:46.334005 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:09:46.347434 1375811 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1349907.pem /etc/ssl/certs/51391683.0
	I1217 12:09:46.359460 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:09:46.370637 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:09:46.381628 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:09:46.386623 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:09:46.386690 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:09:46.393555 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:09:46.404590 1375811 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13499072.pem /etc/ssl/certs/3ec20f2e.0
	I1217 12:09:46.416352 1375811 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:09:46.420710 1375811 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
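	
	The openssl/ln pairs above follow OpenSSL's hashed-symlink convention: each CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under the name <subject-hash>.0 so that TLS clients can locate it. The same step for a hypothetical certificate example.pem would look roughly like:
	
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
		sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${HASH}.0"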
	I1217 12:09:46.425316 1375811 ssh_runner.go:195] Run: cat /version.json
	I1217 12:09:46.425341 1375811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:09:46.459294 1375811 ssh_runner.go:195] Run: systemctl --version
	I1217 12:09:46.465596 1375811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:09:46.608197 1375811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:09:46.614954 1375811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:09:46.615033 1375811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:09:46.634185 1375811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 12:09:46.634218 1375811 start.go:496] detecting cgroup driver to use...
	I1217 12:09:46.634283 1375811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:09:46.654297 1375811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:09:46.671180 1375811 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:09:46.671251 1375811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:09:46.688752 1375811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:09:46.705678 1375811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:09:46.855113 1375811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:09:47.070019 1375811 docker.go:234] disabling docker service ...
	I1217 12:09:47.070097 1375811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:09:47.086687 1375811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:09:47.102961 1375811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:09:47.252829 1375811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:09:47.399925 1375811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:09:47.415305 1375811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:09:47.437148 1375811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:09:47.437208 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.450137 1375811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:09:47.450205 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.462734 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.475811 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.488490 1375811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:09:47.502184 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.515580 1375811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:09:47.536590 1375811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
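	
	Taken together, the sed edits above pin the pause image, cgroup driver, conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in config. A quick way to confirm the result inside the VM (a sketch; surrounding lines in the file may differ):
	
		sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# expected, roughly:
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",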
	I1217 12:09:47.549784 1375811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:09:47.560436 1375811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 12:09:47.560524 1375811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 12:09:47.581885 1375811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:09:47.593538 1375811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:09:47.735626 1375811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 12:09:47.852959 1375811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:09:47.853055 1375811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 12:09:47.858416 1375811 start.go:564] Will wait 60s for crictl version
	I1217 12:09:47.858487 1375811 ssh_runner.go:195] Run: which crictl
	I1217 12:09:47.862513 1375811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:09:47.894291 1375811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:09:47.894383 1375811 ssh_runner.go:195] Run: crio --version
	I1217 12:09:47.922410 1375811 ssh_runner.go:195] Run: crio --version
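	
	With /etc/crictl.yaml pointing at the CRI-O socket (written a few steps earlier), the runtime can also be queried directly from inside the VM; a sketch using standard crictl subcommands:
	
		sudo crictl version   # RuntimeName/RuntimeVersion as logged above
		sudo crictl info      # runtime status and applied config, as JSON
		sudo crictl images    # images currently known to CRI-O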
	I1217 12:09:47.953154 1375811 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 12:09:47.957068 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:47.957450 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:09:47.957471 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:09:47.957617 1375811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 12:09:47.962164 1375811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:09:47.977338 1375811 kubeadm.go:884] updating cluster {Name:test-preload-675733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:test-preload-675733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:09:47.977508 1375811 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:09:47.977572 1375811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:09:48.012943 1375811 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.3". assuming images are not preloaded.
	I1217 12:09:48.013033 1375811 ssh_runner.go:195] Run: which lz4
	I1217 12:09:48.017565 1375811 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 12:09:48.022599 1375811 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 12:09:48.022641 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340314847 bytes)
	I1217 12:09:49.266313 1375811 crio.go:462] duration metric: took 1.248778419s to copy over tarball
	I1217 12:09:49.266403 1375811 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 12:09:50.788914 1375811 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.522474272s)
	I1217 12:09:50.788958 1375811 crio.go:469] duration metric: took 1.522611499s to extract the tarball
	I1217 12:09:50.788970 1375811 ssh_runner.go:146] rm: /preloaded.tar.lz4
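	
	The preload is an lz4-compressed tarball that gets unpacked under /var (see the tar invocation above). Its contents can be listed without extracting it, e.g. on the host, assuming GNU tar and the lz4 binary are available:
	
		tar -I lz4 -tf /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 | head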
	I1217 12:09:50.826119 1375811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:09:50.865515 1375811 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:09:50.865549 1375811 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:09:50.865559 1375811 kubeadm.go:935] updating node { 192.168.39.23 8443 v1.34.3 crio true true} ...
	I1217 12:09:50.865698 1375811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-675733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:test-preload-675733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
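	
	The kubelet unit and drop-in shown above are copied into the guest a few steps further down (the scp lines targeting /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service). Once in place, the effective unit can be reviewed with systemd itself; a sketch run inside the VM:
	
		systemctl cat kubelet                  # merged unit plus the 10-kubeadm.conf drop-in
		systemctl status kubelet --no-pager    # state after the daemon-reload/start further down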
	I1217 12:09:50.865771 1375811 ssh_runner.go:195] Run: crio config
	I1217 12:09:50.909888 1375811 cni.go:84] Creating CNI manager for ""
	I1217 12:09:50.909935 1375811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:09:50.909961 1375811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:09:50.910015 1375811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-675733 NodeName:test-preload-675733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:09:50.910195 1375811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-675733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.23"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
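	
	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A config in this shape can be sanity-checked without touching the cluster; a sketch, assuming the kubeadm binary already found under /var/lib/minikube/binaries/v1.34.3 is used inside the VM:
	
		sudo /var/lib/minikube/binaries/v1.34.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run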
	
	I1217 12:09:50.910292 1375811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:09:50.922862 1375811 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:09:50.922957 1375811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:09:50.934791 1375811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1217 12:09:50.956468 1375811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:09:50.976616 1375811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 12:09:50.997874 1375811 ssh_runner.go:195] Run: grep 192.168.39.23	control-plane.minikube.internal$ /etc/hosts
	I1217 12:09:51.002362 1375811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
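
The one-liner above pins control-plane.minikube.internal to the node IP idempotently: drop any existing mapping for the name, then append the new one. Roughly the same edit expressed in Go, as a hedged sketch that targets a throwaway hosts.txt rather than the real /etc/hosts (path, IP and hostname are placeholders):

    // ensure_hosts.go: idempotently map a hostname to an IP in an /etc/hosts-style file.
    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		fields := strings.Fields(line)
    		// Drop blank lines and any existing mapping that ends with this hostname.
    		if len(fields) == 0 || fields[len(fields)-1] == host {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("hosts.txt", "192.168.39.23", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }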
	I1217 12:09:51.016931 1375811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:09:51.167845 1375811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:09:51.201493 1375811 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733 for IP: 192.168.39.23
	I1217 12:09:51.201521 1375811 certs.go:195] generating shared ca certs ...
	I1217 12:09:51.201544 1375811 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:09:51.201762 1375811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:09:51.201831 1375811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:09:51.201848 1375811 certs.go:257] generating profile certs ...
	I1217 12:09:51.202064 1375811 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.key
	I1217 12:09:51.202167 1375811 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/apiserver.key.1083e7a1
	I1217 12:09:51.202228 1375811 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/proxy-client.key
	I1217 12:09:51.202386 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:09:51.202440 1375811 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:09:51.202455 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:09:51.202493 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:09:51.202529 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:09:51.202564 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:09:51.202628 1375811 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:09:51.203552 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:09:51.236896 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:09:51.270617 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:09:51.300158 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:09:51.331208 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 12:09:51.362041 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 12:09:51.391925 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:09:51.421483 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:09:51.451414 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:09:51.482333 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:09:51.512829 1375811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:09:51.543629 1375811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:09:51.564470 1375811 ssh_runner.go:195] Run: openssl version
	I1217 12:09:51.571177 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:09:51.582831 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:09:51.594161 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:09:51.599335 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:09:51.599403 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:09:51.606558 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:09:51.618424 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:09:51.629937 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:09:51.642691 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:09:51.648179 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:09:51.648271 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:09:51.655381 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:09:51.666650 1375811 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:51.678056 1375811 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:09:51.690480 1375811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:51.696101 1375811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:51.696175 1375811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:09:51.703442 1375811 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
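
The ls/openssl/ln sequence above is how each certificate dropped into /usr/share/ca-certificates becomes discoverable under /etc/ssl/certs/<subject-hash>.0 (for example b5213941.0 for the minikube CA). A minimal sketch of that single step, assuming the openssl CLI is on PATH; the file and directory names here are placeholders:

    // hash_link.go: compute a certificate's OpenSSL subject hash and create the
    // <hash>.0 symlink that the system trust lookup expects.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "minikubeCA.pem" // placeholder for /usr/share/ca-certificates/minikubeCA.pem
    	linkDir := "certs"           // placeholder for /etc/ssl/certs

    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

    	if err := os.MkdirAll(linkDir, 0755); err != nil {
    		log.Fatal(err)
    	}
    	link := filepath.Join(linkDir, hash+".0")
    	_ = os.Remove(link) // mimic `ln -fs`: replace an existing link if present
    	if err := os.Symlink(certPath, link); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("linked", link, "->", certPath)
    }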
	I1217 12:09:51.715738 1375811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:09:51.721229 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:09:51.728623 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:09:51.735886 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:09:51.743918 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:09:51.751563 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:09:51.759043 1375811 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
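
Each `-checkend 86400` probe above asks whether a certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; a minimal sketch with a placeholder certificate path:

    // checkend.go: report whether a PEM certificate expires within 24h,
    // mirroring `openssl x509 -checkend 86400` (non-zero exit means "will expire").
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("apiserver.crt") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid past the next 24h, expires", cert.NotAfter)
    }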
	I1217 12:09:51.766503 1375811 kubeadm.go:401] StartCluster: {Name:test-preload-675733 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.3 ClusterName:test-preload-675733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:09:51.766618 1375811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:09:51.766714 1375811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:09:51.798721 1375811 cri.go:89] found id: ""
	I1217 12:09:51.798814 1375811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:09:51.811346 1375811 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 12:09:51.811368 1375811 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 12:09:51.811419 1375811 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 12:09:51.823918 1375811 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 12:09:51.824482 1375811 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-675733" does not appear in /home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:09:51.824666 1375811 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-1345916/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-675733" cluster setting kubeconfig missing "test-preload-675733" context setting]
	I1217 12:09:51.825060 1375811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:09:51.825721 1375811 kapi.go:59] client config for test-preload-675733: &rest.Config{Host:"https://192.168.39.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.key", CAFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 12:09:51.826282 1375811 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 12:09:51.826302 1375811 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 12:09:51.826309 1375811 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 12:09:51.826316 1375811 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 12:09:51.826321 1375811 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 12:09:51.826786 1375811 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 12:09:51.838232 1375811 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.23
	I1217 12:09:51.838265 1375811 kubeadm.go:1161] stopping kube-system containers ...
	I1217 12:09:51.838279 1375811 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 12:09:51.838328 1375811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:09:51.870389 1375811 cri.go:89] found id: ""
	I1217 12:09:51.870468 1375811 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 12:09:51.892626 1375811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 12:09:51.904398 1375811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 12:09:51.904417 1375811 kubeadm.go:158] found existing configuration files:
	
	I1217 12:09:51.904483 1375811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 12:09:51.915482 1375811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 12:09:51.915548 1375811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 12:09:51.927087 1375811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 12:09:51.937957 1375811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 12:09:51.938152 1375811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 12:09:51.950963 1375811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 12:09:51.963872 1375811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 12:09:51.963961 1375811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 12:09:51.976222 1375811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 12:09:51.987399 1375811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 12:09:51.987454 1375811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 12:09:51.998669 1375811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 12:09:52.010079 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:09:52.061828 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:09:53.621967 1375811 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.560090353s)
	I1217 12:09:53.622099 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:09:53.883753 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:09:53.950681 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:09:54.040632 1375811 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:09:54.040720 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:09:54.541246 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:09:55.041202 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:09:55.541700 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:09:56.041120 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:09:56.084409 1375811 api_server.go:72] duration metric: took 2.043775078s to wait for apiserver process to appear ...
	I1217 12:09:56.084438 1375811 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:09:56.084461 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:09:58.887784 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 12:09:58.887814 1375811 api_server.go:103] status: https://192.168.39.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 12:09:58.887831 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:09:58.965790 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 12:09:58.965827 1375811 api_server.go:103] status: https://192.168.39.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 12:09:59.085142 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:09:59.111400 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 12:09:59.111430 1375811 api_server.go:103] status: https://192.168.39.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 12:09:59.585178 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:09:59.590306 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 12:09:59.590336 1375811 api_server.go:103] status: https://192.168.39.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 12:10:00.084692 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:10:00.090241 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 12:10:00.090265 1375811 api_server.go:103] status: https://192.168.39.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 12:10:00.584949 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:10:00.589685 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I1217 12:10:00.596814 1375811 api_server.go:141] control plane version: v1.34.3
	I1217 12:10:00.596872 1375811 api_server.go:131] duration metric: took 4.512425264s to wait for apiserver health ...
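
The healthz loop above tolerates the early 403 (anonymous user, before RBAC allows /healthz) and 500 (rbac/bootstrap-roles and priority-class hooks still pending) responses and simply keeps polling until the endpoint returns 200. A minimal Go sketch of such a poller; the endpoint, timeout, and the InsecureSkipVerify shortcut are placeholders for illustration, not what minikube itself does:

    // wait_healthz.go: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Demo only; real code should trust the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.23:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			log.Printf("healthz returned %d, retrying", resp.StatusCode)
    		} else {
    			log.Printf("healthz request failed: %v, retrying", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver did not become healthy before the deadline")
    }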
	I1217 12:10:00.596888 1375811 cni.go:84] Creating CNI manager for ""
	I1217 12:10:00.596906 1375811 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:10:00.598883 1375811 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 12:10:00.600269 1375811 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 12:10:00.613733 1375811 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 12:10:00.635487 1375811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:10:00.641211 1375811 system_pods.go:59] 7 kube-system pods found
	I1217 12:10:00.641258 1375811 system_pods.go:61] "coredns-66bc5c9577-5c7gt" [b26c1110-2544-4f30-af2d-1b425d188b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:00.641273 1375811 system_pods.go:61] "etcd-test-preload-675733" [4baff838-7f45-4af0-930b-8fa283e060c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:00.641282 1375811 system_pods.go:61] "kube-apiserver-test-preload-675733" [83ef9cb0-012a-470e-9b00-7a6a05b0a59b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 12:10:00.641288 1375811 system_pods.go:61] "kube-controller-manager-test-preload-675733" [4341f9c1-d6a8-41f9-9081-86f2174eef30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:10:00.641293 1375811 system_pods.go:61] "kube-proxy-8xrhx" [1eb2b80c-4ba6-4149-918b-46fc783c00b9] Running
	I1217 12:10:00.641298 1375811 system_pods.go:61] "kube-scheduler-test-preload-675733" [ed4af9c6-349b-447c-a662-997bab257930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 12:10:00.641302 1375811 system_pods.go:61] "storage-provisioner" [621da286-3c9e-4223-8a07-8a7f9b479be8] Running
	I1217 12:10:00.641308 1375811 system_pods.go:74] duration metric: took 5.796664ms to wait for pod list to return data ...
	I1217 12:10:00.641315 1375811 node_conditions.go:102] verifying NodePressure condition ...
	I1217 12:10:00.645529 1375811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 12:10:00.645556 1375811 node_conditions.go:123] node cpu capacity is 2
	I1217 12:10:00.645570 1375811 node_conditions.go:105] duration metric: took 4.251454ms to run NodePressure ...
	I1217 12:10:00.645623 1375811 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 12:10:00.901549 1375811 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 12:10:00.905156 1375811 kubeadm.go:744] kubelet initialised
	I1217 12:10:00.905182 1375811 kubeadm.go:745] duration metric: took 3.607408ms waiting for restarted kubelet to initialise ...
	I1217 12:10:00.905203 1375811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 12:10:00.922561 1375811 ops.go:34] apiserver oom_adj: -16
	I1217 12:10:00.922595 1375811 kubeadm.go:602] duration metric: took 9.111218577s to restartPrimaryControlPlane
	I1217 12:10:00.922612 1375811 kubeadm.go:403] duration metric: took 9.156119743s to StartCluster
	I1217 12:10:00.922645 1375811 settings.go:142] acquiring lock: {Name:mkab196c8ac23f97b54763cecaa5ac5ac8f7dd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:00.922755 1375811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:10:00.923596 1375811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:10:00.923864 1375811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 12:10:00.923967 1375811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:10:00.924076 1375811 addons.go:70] Setting storage-provisioner=true in profile "test-preload-675733"
	I1217 12:10:00.924099 1375811 addons.go:239] Setting addon storage-provisioner=true in "test-preload-675733"
	W1217 12:10:00.924107 1375811 addons.go:248] addon storage-provisioner should already be in state true
	I1217 12:10:00.924117 1375811 addons.go:70] Setting default-storageclass=true in profile "test-preload-675733"
	I1217 12:10:00.924137 1375811 host.go:66] Checking if "test-preload-675733" exists ...
	I1217 12:10:00.924149 1375811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-675733"
	I1217 12:10:00.924172 1375811 config.go:182] Loaded profile config "test-preload-675733": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:10:00.925941 1375811 out.go:179] * Verifying Kubernetes components...
	I1217 12:10:00.926635 1375811 kapi.go:59] client config for test-preload-675733: &rest.Config{Host:"https://192.168.39.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.key", CAFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 12:10:00.926715 1375811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:10:00.926757 1375811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:10:00.927055 1375811 addons.go:239] Setting addon default-storageclass=true in "test-preload-675733"
	W1217 12:10:00.927081 1375811 addons.go:248] addon default-storageclass should already be in state true
	I1217 12:10:00.927109 1375811 host.go:66] Checking if "test-preload-675733" exists ...
	I1217 12:10:00.927941 1375811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:10:00.927959 1375811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:10:00.928967 1375811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:10:00.929008 1375811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:10:00.930900 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:10:00.931308 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:10:00.931331 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:10:00.931458 1375811 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/id_rsa Username:docker}
	I1217 12:10:00.932079 1375811 main.go:143] libmachine: domain test-preload-675733 has defined MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:10:00.932544 1375811 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:b5:0f", ip: ""} in network mk-test-preload-675733: {Iface:virbr1 ExpiryTime:2025-12-17 13:09:42 +0000 UTC Type:0 Mac:52:54:00:a5:b5:0f Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:test-preload-675733 Clientid:01:52:54:00:a5:b5:0f}
	I1217 12:10:00.932579 1375811 main.go:143] libmachine: domain test-preload-675733 has defined IP address 192.168.39.23 and MAC address 52:54:00:a5:b5:0f in network mk-test-preload-675733
	I1217 12:10:00.932835 1375811 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/test-preload-675733/id_rsa Username:docker}
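
The sshutil lines above open key-authenticated SSH sessions to the node (here used to copy the addon manifests). A comparable client with golang.org/x/crypto/ssh, as a hedged sketch; the user, host and key path are placeholders:

    // ssh_client.go: open an SSH session with private-key auth and run a command.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("id_rsa") // placeholder for the machine's key
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify host keys in real code
    	}
    	conn, err := ssh.Dial("tcp", "192.168.39.23:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	session, err := conn.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("uname -a")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }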
	I1217 12:10:01.189689 1375811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:10:01.208368 1375811 node_ready.go:35] waiting up to 6m0s for node "test-preload-675733" to be "Ready" ...
	I1217 12:10:01.212382 1375811 node_ready.go:49] node "test-preload-675733" is "Ready"
	I1217 12:10:01.212412 1375811 node_ready.go:38] duration metric: took 4.007163ms for node "test-preload-675733" to be "Ready" ...
	I1217 12:10:01.212426 1375811 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:10:01.212478 1375811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:10:01.233675 1375811 api_server.go:72] duration metric: took 309.770878ms to wait for apiserver process to appear ...
	I1217 12:10:01.233710 1375811 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:10:01.233740 1375811 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I1217 12:10:01.239644 1375811 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I1217 12:10:01.240965 1375811 api_server.go:141] control plane version: v1.34.3
	I1217 12:10:01.240995 1375811 api_server.go:131] duration metric: took 7.277983ms to wait for apiserver health ...
	I1217 12:10:01.241004 1375811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:10:01.247963 1375811 system_pods.go:59] 7 kube-system pods found
	I1217 12:10:01.248006 1375811 system_pods.go:61] "coredns-66bc5c9577-5c7gt" [b26c1110-2544-4f30-af2d-1b425d188b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:01.248016 1375811 system_pods.go:61] "etcd-test-preload-675733" [4baff838-7f45-4af0-930b-8fa283e060c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:01.248028 1375811 system_pods.go:61] "kube-apiserver-test-preload-675733" [83ef9cb0-012a-470e-9b00-7a6a05b0a59b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 12:10:01.248039 1375811 system_pods.go:61] "kube-controller-manager-test-preload-675733" [4341f9c1-d6a8-41f9-9081-86f2174eef30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:10:01.248048 1375811 system_pods.go:61] "kube-proxy-8xrhx" [1eb2b80c-4ba6-4149-918b-46fc783c00b9] Running
	I1217 12:10:01.248060 1375811 system_pods.go:61] "kube-scheduler-test-preload-675733" [ed4af9c6-349b-447c-a662-997bab257930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 12:10:01.248072 1375811 system_pods.go:61] "storage-provisioner" [621da286-3c9e-4223-8a07-8a7f9b479be8] Running
	I1217 12:10:01.248080 1375811 system_pods.go:74] duration metric: took 7.070569ms to wait for pod list to return data ...
	I1217 12:10:01.248088 1375811 default_sa.go:34] waiting for default service account to be created ...
	I1217 12:10:01.250186 1375811 default_sa.go:45] found service account: "default"
	I1217 12:10:01.250205 1375811 default_sa.go:55] duration metric: took 2.102437ms for default service account to be created ...
	I1217 12:10:01.250213 1375811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 12:10:01.254312 1375811 system_pods.go:86] 7 kube-system pods found
	I1217 12:10:01.254335 1375811 system_pods.go:89] "coredns-66bc5c9577-5c7gt" [b26c1110-2544-4f30-af2d-1b425d188b0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:10:01.254342 1375811 system_pods.go:89] "etcd-test-preload-675733" [4baff838-7f45-4af0-930b-8fa283e060c5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 12:10:01.254351 1375811 system_pods.go:89] "kube-apiserver-test-preload-675733" [83ef9cb0-012a-470e-9b00-7a6a05b0a59b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 12:10:01.254357 1375811 system_pods.go:89] "kube-controller-manager-test-preload-675733" [4341f9c1-d6a8-41f9-9081-86f2174eef30] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 12:10:01.254361 1375811 system_pods.go:89] "kube-proxy-8xrhx" [1eb2b80c-4ba6-4149-918b-46fc783c00b9] Running
	I1217 12:10:01.254367 1375811 system_pods.go:89] "kube-scheduler-test-preload-675733" [ed4af9c6-349b-447c-a662-997bab257930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 12:10:01.254372 1375811 system_pods.go:89] "storage-provisioner" [621da286-3c9e-4223-8a07-8a7f9b479be8] Running
	I1217 12:10:01.254379 1375811 system_pods.go:126] duration metric: took 4.160735ms to wait for k8s-apps to be running ...
	I1217 12:10:01.254385 1375811 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 12:10:01.254434 1375811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 12:10:01.266277 1375811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:10:01.274537 1375811 system_svc.go:56] duration metric: took 20.129097ms WaitForService to wait for kubelet
	I1217 12:10:01.274585 1375811 kubeadm.go:587] duration metric: took 350.686685ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:10:01.274617 1375811 node_conditions.go:102] verifying NodePressure condition ...
	I1217 12:10:01.280411 1375811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 12:10:01.280437 1375811 node_conditions.go:123] node cpu capacity is 2
	I1217 12:10:01.280449 1375811 node_conditions.go:105] duration metric: took 5.824434ms to run NodePressure ...
	I1217 12:10:01.280465 1375811 start.go:242] waiting for startup goroutines ...
	I1217 12:10:01.427778 1375811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:10:02.071752 1375811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 12:10:02.072950 1375811 addons.go:530] duration metric: took 1.148992962s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 12:10:02.072994 1375811 start.go:247] waiting for cluster config update ...
	I1217 12:10:02.073009 1375811 start.go:256] writing updated cluster config ...
	I1217 12:10:02.073256 1375811 ssh_runner.go:195] Run: rm -f paused
	I1217 12:10:02.079578 1375811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 12:10:02.080092 1375811 kapi.go:59] client config for test-preload-675733: &rest.Config{Host:"https://192.168.39.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.crt", KeyFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/test-preload-675733/client.key", CAFile:"/home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]u
int8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 12:10:02.083634 1375811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5c7gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:03.590534 1375811 pod_ready.go:94] pod "coredns-66bc5c9577-5c7gt" is "Ready"
	I1217 12:10:03.590577 1375811 pod_ready.go:86] duration metric: took 1.506921568s for pod "coredns-66bc5c9577-5c7gt" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:03.593712 1375811 pod_ready.go:83] waiting for pod "etcd-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 12:10:05.599262 1375811 pod_ready.go:104] pod "etcd-test-preload-675733" is not "Ready", error: <nil>
	W1217 12:10:07.599863 1375811 pod_ready.go:104] pod "etcd-test-preload-675733" is not "Ready", error: <nil>
	W1217 12:10:09.600386 1375811 pod_ready.go:104] pod "etcd-test-preload-675733" is not "Ready", error: <nil>
	I1217 12:10:10.098675 1375811 pod_ready.go:94] pod "etcd-test-preload-675733" is "Ready"
	I1217 12:10:10.098700 1375811 pod_ready.go:86] duration metric: took 6.504963704s for pod "etcd-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:10.100597 1375811 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 12:10:12.106546 1375811 pod_ready.go:104] pod "kube-apiserver-test-preload-675733" is not "Ready", error: <nil>
	I1217 12:10:14.106290 1375811 pod_ready.go:94] pod "kube-apiserver-test-preload-675733" is "Ready"
	I1217 12:10:14.106319 1375811 pod_ready.go:86] duration metric: took 4.005702989s for pod "kube-apiserver-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.108586 1375811 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.112914 1375811 pod_ready.go:94] pod "kube-controller-manager-test-preload-675733" is "Ready"
	I1217 12:10:14.112936 1375811 pod_ready.go:86] duration metric: took 4.326866ms for pod "kube-controller-manager-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.115321 1375811 pod_ready.go:83] waiting for pod "kube-proxy-8xrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.119706 1375811 pod_ready.go:94] pod "kube-proxy-8xrhx" is "Ready"
	I1217 12:10:14.119726 1375811 pod_ready.go:86] duration metric: took 4.389143ms for pod "kube-proxy-8xrhx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.121491 1375811 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.304811 1375811 pod_ready.go:94] pod "kube-scheduler-test-preload-675733" is "Ready"
	I1217 12:10:14.304854 1375811 pod_ready.go:86] duration metric: took 183.337721ms for pod "kube-scheduler-test-preload-675733" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 12:10:14.304874 1375811 pod_ready.go:40] duration metric: took 12.225264576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
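
The pod_ready waits above poll labelled kube-system pods until each reports the Ready condition. A similar check expressed with client-go, as a minimal sketch assuming ~/.kube/config points at the cluster and using k8s-app=kube-dns as an example selector:

    // wait_ready.go: poll kube-system pods matching a label until one is Ready.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"path/filepath"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    		if err != nil {
    			log.Fatal(err)
    		}
    		for i := range pods.Items {
    			if podReady(&pods.Items[i]) {
    				fmt.Println("ready:", pods.Items[i].Name)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("no matching pod became Ready before the deadline")
    }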
	I1217 12:10:14.350954 1375811 start.go:625] kubectl: 1.34.3, cluster: 1.34.3 (minor skew: 0)
	I1217 12:10:14.353155 1375811 out.go:179] * Done! kubectl is now configured to use "test-preload-675733" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.153495327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af498f77-b593-422f-81c1-08b0cf485ec5 name=/runtime.v1.RuntimeService/Version
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.154718940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3830a26a-a249-406e-a5c9-25573911c831 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.155136097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765973415155113624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3830a26a-a249-406e-a5c9-25573911c831 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.156062723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cdc16e6-d8b2-458f-baa5-e113447b0f3e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.156255011Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cdc16e6-d8b2-458f-baa5-e113447b0f3e name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.156708116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:645ea7e964b7026ed1ebefb58ab8c9b4194b4a9c09988ddd9df37449d71885b3,PodSandboxId:e8bb4b6ade4f0692476eaaeff691b743f1e743bc30bb39c545749fea8c86acbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765973403040161530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5c7gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b26c1110-2544-4f30-af2d-1b425d188b0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e7e51bdfea327779ff31fad9481c08a9c06d730f0c7379dcbe627d7c07c998,PodSandboxId:0a4d2bd619f45b4332a7083112a0d9e44f6b8976ae3252328077662579d0ee95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765973399393189458,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb2b80c-4ba6-4149-918b-46fc783c00b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18445f4def2148dafbb313c939ac50c0f3f2f826808499805807490fda4ffb95,PodSandboxId:048986bc355977d3e1f40f4b97e18695e9e77298e8d26f22ba9d905045abe723,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765973399401209557,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621da286-3c9e-4223-8a07-8a7f9b479be8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa79780af44764be3a0d2a2b51e6a52a38e948469b44d26b5c78b1fd7aa0f8c,PodSandboxId:31a44558d44ed8f1dbc817f4c69345defdd2333984a0bec76ee905f1544630aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765973395784268955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c732f30ff4182f8b1a68333cbc215063,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fa13a15e6b144207452ed983e5d139841e1488a02c91133248c5076d4d3882,PodSandboxId:bc7ad2408bca8e823f49d1d57957dcd7e3138ce18af79874ae651af3045bf187,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1765973395764836180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c67db0d06a417b73396be5cbe5d9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a86710e2a268f7647c5daccb47605ae37ed34ee1c2cb284d531742300c7d3c9,PodSandboxId:f481dc0757a913fd7f21399306e0629dff3535d6b1f38ef4f7d1bd81634aa206,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765973395759819911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2b5911849d21ee509c10bc37521319,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3d4441a5585aa3a58c13b16b2953ad71cb3fa8dcbf1fa162321b36de92061,PodSandboxId:5b98a3df2f96cefbe8d71c5dc690453eca248199f526104d4c17bb358a7eb500,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765973395717177339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480b0e2d6d18e0794fb55a40b0f95ede,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cdc16e6-d8b2-458f-baa5-e113447b0f3e name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.170452748Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=222cfc0c-d8f7-4ccd-9c90-8d58fb1ffc29 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.170716796Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e8bb4b6ade4f0692476eaaeff691b743f1e743bc30bb39c545749fea8c86acbe,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5c7gt,Uid:b26c1110-2544-4f30-af2d-1b425d188b0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973402824017416,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5c7gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b26c1110-2544-4f30-af2d-1b425d188b0a,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T12:09:58.978707513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a4d2bd619f45b4332a7083112a0d9e44f6b8976ae3252328077662579d0ee95,Metadata:&PodSandboxMetadata{Name:kube-proxy-8xrhx,Uid:1eb2b80c-4ba6-4149-918b-46fc783c00b9,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1765973399296233613,Labels:map[string]string{controller-revision-hash: 55c7cb7b75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8xrhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb2b80c-4ba6-4149-918b-46fc783c00b9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T12:09:58.978718655Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:048986bc355977d3e1f40f4b97e18695e9e77298e8d26f22ba9d905045abe723,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:621da286-3c9e-4223-8a07-8a7f9b479be8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973399290146535,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621da286-3c9e-4223-8a07-8a7f
9b479be8,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T12:09:58.978721320Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b98a3df2f96cefbe8d71c5dc690453eca248199f526104d4c17bb358a7eb500,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-675733,Uid:480b0e2
d6d18e0794fb55a40b0f95ede,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973395518913280,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480b0e2d6d18e0794fb55a40b0f95ede,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.23:8443,kubernetes.io/config.hash: 480b0e2d6d18e0794fb55a40b0f95ede,kubernetes.io/config.seen: 2025-12-17T12:09:53.958917704Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:31a44558d44ed8f1dbc817f4c69345defdd2333984a0bec76ee905f1544630aa,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-675733,Uid:c732f30ff4182f8b1a68333cbc215063,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973395514037209,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-t
est-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c732f30ff4182f8b1a68333cbc215063,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.23:2379,kubernetes.io/config.hash: c732f30ff4182f8b1a68333cbc215063,kubernetes.io/config.seen: 2025-12-17T12:09:54.023927586Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc7ad2408bca8e823f49d1d57957dcd7e3138ce18af79874ae651af3045bf187,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-675733,Uid:a0c67db0d06a417b73396be5cbe5d9e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973395510443014,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c67db0d06a417b73396be5cbe5d9e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a0c67db0d06a417b7
3396be5cbe5d9e7,kubernetes.io/config.seen: 2025-12-17T12:09:53.958922974Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f481dc0757a913fd7f21399306e0629dff3535d6b1f38ef4f7d1bd81634aa206,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-675733,Uid:6c2b5911849d21ee509c10bc37521319,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765973395508280219,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2b5911849d21ee509c10bc37521319,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6c2b5911849d21ee509c10bc37521319,kubernetes.io/config.seen: 2025-12-17T12:09:53.958921918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=222cfc0c-d8f7-4ccd-9c90-8d58fb1ffc29 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.171444520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea332ddb-1e91-4e94-859f-2f0009a16775 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.171630601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea332ddb-1e91-4e94-859f-2f0009a16775 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.171776456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:645ea7e964b7026ed1ebefb58ab8c9b4194b4a9c09988ddd9df37449d71885b3,PodSandboxId:e8bb4b6ade4f0692476eaaeff691b743f1e743bc30bb39c545749fea8c86acbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765973403040161530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5c7gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b26c1110-2544-4f30-af2d-1b425d188b0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e7e51bdfea327779ff31fad9481c08a9c06d730f0c7379dcbe627d7c07c998,PodSandboxId:0a4d2bd619f45b4332a7083112a0d9e44f6b8976ae3252328077662579d0ee95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765973399393189458,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb2b80c-4ba6-4149-918b-46fc783c00b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18445f4def2148dafbb313c939ac50c0f3f2f826808499805807490fda4ffb95,PodSandboxId:048986bc355977d3e1f40f4b97e18695e9e77298e8d26f22ba9d905045abe723,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765973399401209557,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621da286-3c9e-4223-8a07-8a7f9b479be8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa79780af44764be3a0d2a2b51e6a52a38e948469b44d26b5c78b1fd7aa0f8c,PodSandboxId:31a44558d44ed8f1dbc817f4c69345defdd2333984a0bec76ee905f1544630aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765973395784268955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c732f30ff4182f8b1a68333cbc215063,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fa13a15e6b144207452ed983e5d139841e1488a02c91133248c5076d4d3882,PodSandboxId:bc7ad2408bca8e823f49d1d57957dcd7e3138ce18af79874ae651af3045bf187,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1765973395764836180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c67db0d06a417b73396be5cbe5d9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a86710e2a268f7647c5daccb47605ae37ed34ee1c2cb284d531742300c7d3c9,PodSandboxId:f481dc0757a913fd7f21399306e0629dff3535d6b1f38ef4f7d1bd81634aa206,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765973395759819911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2b5911849d21ee509c10bc37521319,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3d4441a5585aa3a58c13b16b2953ad71cb3fa8dcbf1fa162321b36de92061,PodSandboxId:5b98a3df2f96cefbe8d71c5dc690453eca248199f526104d4c17bb358a7eb500,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765973395717177339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480b0e2d6d18e0794fb55a40b0f95ede,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea332ddb-1e91-4e94-859f-2f0009a16775 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.191948335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd83d906-d742-4597-913d-7d4cc705c4d9 name=/runtime.v1.RuntimeService/Version
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.192068462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd83d906-d742-4597-913d-7d4cc705c4d9 name=/runtime.v1.RuntimeService/Version
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.193524023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ac1df80-1a36-4659-a4c7-e1937f424042 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.193981515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765973415193945165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ac1df80-1a36-4659-a4c7-e1937f424042 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.195054574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81a598e6-36e1-403a-8551-a5d87f644c52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.195118727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81a598e6-36e1-403a-8551-a5d87f644c52 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.195260349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:645ea7e964b7026ed1ebefb58ab8c9b4194b4a9c09988ddd9df37449d71885b3,PodSandboxId:e8bb4b6ade4f0692476eaaeff691b743f1e743bc30bb39c545749fea8c86acbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765973403040161530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5c7gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b26c1110-2544-4f30-af2d-1b425d188b0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e7e51bdfea327779ff31fad9481c08a9c06d730f0c7379dcbe627d7c07c998,PodSandboxId:0a4d2bd619f45b4332a7083112a0d9e44f6b8976ae3252328077662579d0ee95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765973399393189458,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb2b80c-4ba6-4149-918b-46fc783c00b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18445f4def2148dafbb313c939ac50c0f3f2f826808499805807490fda4ffb95,PodSandboxId:048986bc355977d3e1f40f4b97e18695e9e77298e8d26f22ba9d905045abe723,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765973399401209557,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621da286-3c9e-4223-8a07-8a7f9b479be8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa79780af44764be3a0d2a2b51e6a52a38e948469b44d26b5c78b1fd7aa0f8c,PodSandboxId:31a44558d44ed8f1dbc817f4c69345defdd2333984a0bec76ee905f1544630aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765973395784268955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c732f30ff4182f8b1a68333cbc215063,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fa13a15e6b144207452ed983e5d139841e1488a02c91133248c5076d4d3882,PodSandboxId:bc7ad2408bca8e823f49d1d57957dcd7e3138ce18af79874ae651af3045bf187,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1765973395764836180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c67db0d06a417b73396be5cbe5d9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a86710e2a268f7647c5daccb47605ae37ed34ee1c2cb284d531742300c7d3c9,PodSandboxId:f481dc0757a913fd7f21399306e0629dff3535d6b1f38ef4f7d1bd81634aa206,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765973395759819911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2b5911849d21ee509c10bc37521319,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3d4441a5585aa3a58c13b16b2953ad71cb3fa8dcbf1fa162321b36de92061,PodSandboxId:5b98a3df2f96cefbe8d71c5dc690453eca248199f526104d4c17bb358a7eb500,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765973395717177339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480b0e2d6d18e0794fb55a40b0f95ede,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81a598e6-36e1-403a-8551-a5d87f644c52 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.222573620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7105280-d63e-4f5e-af54-43f5bfbe4e6b name=/runtime.v1.RuntimeService/Version
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.222705543Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7105280-d63e-4f5e-af54-43f5bfbe4e6b name=/runtime.v1.RuntimeService/Version
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.224099511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9820d8b9-2013-4cb4-907a-447744f5973c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.224653367Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765973415224631386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135813,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9820d8b9-2013-4cb4-907a-447744f5973c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.225597757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b2518ca-a4dc-4e66-a7b7-911f59814f35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.225836402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b2518ca-a4dc-4e66-a7b7-911f59814f35 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 12:10:15 test-preload-675733 crio[884]: time="2025-12-17 12:10:15.226151319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:645ea7e964b7026ed1ebefb58ab8c9b4194b4a9c09988ddd9df37449d71885b3,PodSandboxId:e8bb4b6ade4f0692476eaaeff691b743f1e743bc30bb39c545749fea8c86acbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765973403040161530,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5c7gt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b26c1110-2544-4f30-af2d-1b425d188b0a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e7e51bdfea327779ff31fad9481c08a9c06d730f0c7379dcbe627d7c07c998,PodSandboxId:0a4d2bd619f45b4332a7083112a0d9e44f6b8976ae3252328077662579d0ee95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691,State:CONTAINER_RUNNING,CreatedAt:1765973399393189458,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xrhx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eb2b80c-4ba6-4149-918b-46fc783c00b9,},Annotations:map[string]string{io.kubernetes.container.hash: 2110e446,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18445f4def2148dafbb313c939ac50c0f3f2f826808499805807490fda4ffb95,PodSandboxId:048986bc355977d3e1f40f4b97e18695e9e77298e8d26f22ba9d905045abe723,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765973399401209557,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 621da286-3c9e-4223-8a07-8a7f9b479be8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa79780af44764be3a0d2a2b51e6a52a38e948469b44d26b5c78b1fd7aa0f8c,PodSandboxId:31a44558d44ed8f1dbc817f4c69345defdd2333984a0bec76ee905f1544630aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765973395784268955,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c732f30ff4182f8b1a68333cbc215063,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4fa13a15e6b144207452ed983e5d139841e1488a02c91133248c5076d4d3882,PodSandboxId:bc7ad2408bca8e823f49d1d57957dcd7e3138ce18af79874ae651af3045bf187,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78,State:CONTAINER_RUNNING,Crea
tedAt:1765973395764836180,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c67db0d06a417b73396be5cbe5d9e7,},Annotations:map[string]string{io.kubernetes.container.hash: 20daba5a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a86710e2a268f7647c5daccb47605ae37ed34ee1c2cb284d531742300c7d3c9,PodSandboxId:f481dc0757a913fd7f21399306e0629dff3535d6b1f38ef4f7d1bd81634aa206,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942,State:CONTAINER_RUNNING,CreatedAt:1765973395759819911,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2b5911849d21ee509c10bc37521319,},Annotations:map[string]string{io.kubernetes.container.hash: 294cc10a,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fe3d4441a5585aa3a58c13b16b2953ad71cb3fa8dcbf1fa162321b36de92061,PodSandboxId:5b98a3df2f96cefbe8d71c5dc690453eca248199f526104d4c17bb358a7eb500,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c,State:CONTAINER_RUNNING,CreatedAt:1765973395717177339,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675733,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 480b0e2d6d18e0794fb55a40b0f95ede,},Annotations:map[string]string{io.kubernetes.container.hash: 79f683c6,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b2518ca-a4dc-4e66-a7b7-911f59814f35 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	645ea7e964b70       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   1                   e8bb4b6ade4f0       coredns-66bc5c9577-5c7gt                      kube-system
	18445f4def214       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   048986bc35597       storage-provisioner                           kube-system
	68e7e51bdfea3       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691   15 seconds ago      Running             kube-proxy                1                   0a4d2bd619f45       kube-proxy-8xrhx                              kube-system
	cfa79780af447       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   31a44558d44ed       etcd-test-preload-675733                      kube-system
	f4fa13a15e6b1       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78   19 seconds ago      Running             kube-scheduler            1                   bc7ad2408bca8       kube-scheduler-test-preload-675733            kube-system
	8a86710e2a268       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942   19 seconds ago      Running             kube-controller-manager   1                   f481dc0757a91       kube-controller-manager-test-preload-675733   kube-system
	3fe3d4441a558       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c   19 seconds ago      Running             kube-apiserver            1                   5b98a3df2f96c       kube-apiserver-test-preload-675733            kube-system
	
	
	==> coredns [645ea7e964b7026ed1ebefb58ab8c9b4194b4a9c09988ddd9df37449d71885b3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40113 - 53474 "HINFO IN 3450315863415755106.1094914712346491599. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041933256s
	
	
	==> describe nodes <==
	Name:               test-preload-675733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-675733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869
	                    minikube.k8s.io/name=test-preload-675733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T12_08_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 12:08:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-675733
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 12:10:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 12:10:00 +0000   Wed, 17 Dec 2025 12:08:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 12:10:00 +0000   Wed, 17 Dec 2025 12:08:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 12:10:00 +0000   Wed, 17 Dec 2025 12:08:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 12:10:00 +0000   Wed, 17 Dec 2025 12:10:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    test-preload-675733
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 de5ca0c6e730490fa3760033ff97a8f5
	  System UUID:                de5ca0c6-e730-490f-a376-0033ff97a8f5
	  Boot ID:                    496b64ff-3440-45bb-9781-3e694d825eaf
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5c7gt                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     94s
	  kube-system                 etcd-test-preload-675733                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-675733             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-test-preload-675733    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-8xrhx                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-test-preload-675733             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 92s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   Starting                 100s               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  99s                kubelet          Node test-preload-675733 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s                kubelet          Node test-preload-675733 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s                kubelet          Node test-preload-675733 status is now: NodeHasSufficientPID
	  Normal   NodeReady                99s                kubelet          Node test-preload-675733 status is now: NodeReady
	  Normal   RegisteredNode           95s                node-controller  Node test-preload-675733 event: Registered Node test-preload-675733 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-675733 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-675733 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-675733 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-675733 has been rebooted, boot id: 496b64ff-3440-45bb-9781-3e694d825eaf
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-675733 event: Registered Node test-preload-675733 in Controller
	
	
	==> dmesg <==
	[Dec17 12:09] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000530] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003212] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.019534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.079907] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.101031] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.482687] kauditd_printk_skb: 168 callbacks suppressed
	[Dec17 12:10] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [cfa79780af44764be3a0d2a2b51e6a52a38e948469b44d26b5c78b1fd7aa0f8c] <==
	{"level":"warn","ts":"2025-12-17T12:09:57.661108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.686380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.703379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.715819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.735000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.750892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.782340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.792593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.806758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.820764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.845613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.857964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.878006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.907549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.925460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.944780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:57.987415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.004418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.015383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.031581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.062011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.071815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.093785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.124965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T12:09:58.213175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:10:15 up 0 min,  0 users,  load average: 0.21, 0.06, 0.02
	Linux test-preload-675733 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3fe3d4441a5585aa3a58c13b16b2953ad71cb3fa8dcbf1fa162321b36de92061] <==
	I1217 12:09:59.043528       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 12:09:59.047724       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 12:09:59.049106       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 12:09:59.049208       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 12:09:59.049233       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 12:09:59.049410       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 12:09:59.049570       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 12:09:59.049605       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 12:09:59.049716       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 12:09:59.053551       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 12:09:59.053656       1 aggregator.go:171] initial CRD sync complete...
	I1217 12:09:59.053718       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 12:09:59.053738       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 12:09:59.053744       1 cache.go:39] Caches are synced for autoregister controller
	I1217 12:09:59.060118       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 12:09:59.061464       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1217 12:09:59.075843       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 12:09:59.855670       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 12:10:00.732687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 12:10:00.777399       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 12:10:00.808102       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 12:10:00.819032       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 12:10:02.464941       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 12:10:02.664772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 12:10:02.713631       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8a86710e2a268f7647c5daccb47605ae37ed34ee1c2cb284d531742300c7d3c9] <==
	I1217 12:10:02.371103       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 12:10:02.372251       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 12:10:02.373671       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 12:10:02.373848       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 12:10:02.374867       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 12:10:02.379158       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 12:10:02.382500       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 12:10:02.382525       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 12:10:02.382531       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 12:10:02.385353       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 12:10:02.386410       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 12:10:02.386990       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 12:10:02.393941       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 12:10:02.396192       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 12:10:02.396214       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 12:10:02.398476       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 12:10:02.409167       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 12:10:02.409180       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 12:10:02.409862       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1217 12:10:02.410054       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 12:10:02.410376       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 12:10:02.412031       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 12:10:02.414524       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 12:10:02.415856       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 12:10:02.416874       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [68e7e51bdfea327779ff31fad9481c08a9c06d730f0c7379dcbe627d7c07c998] <==
	I1217 12:09:59.574954       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 12:09:59.676216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 12:09:59.676327       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.23"]
	E1217 12:09:59.676405       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 12:09:59.710416       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 12:09:59.710561       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 12:09:59.710650       1 server_linux.go:132] "Using iptables Proxier"
	I1217 12:09:59.719593       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 12:09:59.720034       1 server.go:527] "Version info" version="v1.34.3"
	I1217 12:09:59.720259       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 12:09:59.725569       1 config.go:200] "Starting service config controller"
	I1217 12:09:59.725596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 12:09:59.725658       1 config.go:106] "Starting endpoint slice config controller"
	I1217 12:09:59.725677       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 12:09:59.725758       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 12:09:59.725778       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 12:09:59.726882       1 config.go:309] "Starting node config controller"
	I1217 12:09:59.726986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 12:09:59.727010       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 12:09:59.825896       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 12:09:59.825940       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 12:09:59.825911       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f4fa13a15e6b144207452ed983e5d139841e1488a02c91133248c5076d4d3882] <==
	I1217 12:09:58.311681       1 serving.go:386] Generated self-signed cert in-memory
	W1217 12:09:58.889240       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 12:09:58.889276       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 12:09:58.889311       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 12:09:58.889318       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 12:09:58.986342       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 12:09:58.986377       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 12:09:58.990822       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 12:09:58.990878       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 12:09:58.991060       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 12:09:58.991171       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 12:09:59.091635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 12:09:58 test-preload-675733 kubelet[1223]: E1217 12:09:58.098403    1223 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"test-preload-675733\" not found" node="test-preload-675733"
	Dec 17 12:09:58 test-preload-675733 kubelet[1223]: I1217 12:09:58.954657    1223 apiserver.go:52] "Watching apiserver"
	Dec 17 12:09:58 test-preload-675733 kubelet[1223]: E1217 12:09:58.979752    1223 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-5c7gt" podUID="b26c1110-2544-4f30-af2d-1b425d188b0a"
	Dec 17 12:09:58 test-preload-675733 kubelet[1223]: I1217 12:09:58.981617    1223 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.018167    1223 kubelet_node_status.go:124] "Node was previously registered" node="test-preload-675733"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.018250    1223 kubelet_node_status.go:78] "Successfully registered node" node="test-preload-675733"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.018273    1223 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.019745    1223 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.020541    1223 setters.go:543] "Node became not ready" node="test-preload-675733" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T12:09:59Z","lastTransitionTime":"2025-12-17T12:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.055251    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eb2b80c-4ba6-4149-918b-46fc783c00b9-xtables-lock\") pod \"kube-proxy-8xrhx\" (UID: \"1eb2b80c-4ba6-4149-918b-46fc783c00b9\") " pod="kube-system/kube-proxy-8xrhx"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.055279    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eb2b80c-4ba6-4149-918b-46fc783c00b9-lib-modules\") pod \"kube-proxy-8xrhx\" (UID: \"1eb2b80c-4ba6-4149-918b-46fc783c00b9\") " pod="kube-system/kube-proxy-8xrhx"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.055354    1223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/621da286-3c9e-4223-8a07-8a7f9b479be8-tmp\") pod \"storage-provisioner\" (UID: \"621da286-3c9e-4223-8a07-8a7f9b479be8\") " pod="kube-system/storage-provisioner"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: E1217 12:09:59.055801    1223 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: E1217 12:09:59.055891    1223 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume podName:b26c1110-2544-4f30-af2d-1b425d188b0a nodeName:}" failed. No retries permitted until 2025-12-17 12:09:59.555871576 +0000 UTC m=+5.699464211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume") pod "coredns-66bc5c9577-5c7gt" (UID: "b26c1110-2544-4f30-af2d-1b425d188b0a") : object "kube-system"/"coredns" not registered
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: I1217 12:09:59.105562    1223 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-675733"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: E1217 12:09:59.123752    1223 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-675733\" already exists" pod="kube-system/kube-scheduler-test-preload-675733"
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: E1217 12:09:59.560822    1223 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 12:09:59 test-preload-675733 kubelet[1223]: E1217 12:09:59.561029    1223 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume podName:b26c1110-2544-4f30-af2d-1b425d188b0a nodeName:}" failed. No retries permitted until 2025-12-17 12:10:00.560872125 +0000 UTC m=+6.704464757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume") pod "coredns-66bc5c9577-5c7gt" (UID: "b26c1110-2544-4f30-af2d-1b425d188b0a") : object "kube-system"/"coredns" not registered
	Dec 17 12:10:00 test-preload-675733 kubelet[1223]: E1217 12:10:00.567211    1223 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 12:10:00 test-preload-675733 kubelet[1223]: E1217 12:10:00.567419    1223 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume podName:b26c1110-2544-4f30-af2d-1b425d188b0a nodeName:}" failed. No retries permitted until 2025-12-17 12:10:02.567364636 +0000 UTC m=+8.710957270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b26c1110-2544-4f30-af2d-1b425d188b0a-config-volume") pod "coredns-66bc5c9577-5c7gt" (UID: "b26c1110-2544-4f30-af2d-1b425d188b0a") : object "kube-system"/"coredns" not registered
	Dec 17 12:10:00 test-preload-675733 kubelet[1223]: I1217 12:10:00.903456    1223 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 12:10:04 test-preload-675733 kubelet[1223]: E1217 12:10:04.057673    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765973404056559681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 12:10:04 test-preload-675733 kubelet[1223]: E1217 12:10:04.057695    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765973404056559681  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 12:10:14 test-preload-675733 kubelet[1223]: E1217 12:10:14.058939    1223 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765973414058485427  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	Dec 17 12:10:14 test-preload-675733 kubelet[1223]: E1217 12:10:14.058959    1223 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765973414058485427  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135813}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [18445f4def2148dafbb313c939ac50c0f3f2f826808499805807490fda4ffb95] <==
	I1217 12:09:59.492223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-675733 -n test-preload-675733
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-675733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-675733" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-675733
--- FAIL: TestPreload (149.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (73.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137189 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-137189 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.508393689s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-137189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-137189" primary control-plane node in "pause-137189" cluster
	* Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-137189" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 12:17:38.291714 1383836 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:17:38.291860 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.291874 1383836 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:38.291880 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.292244 1383836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:17:38.292821 1383836 out.go:368] Setting JSON to false
	I1217 12:17:38.295783 1383836 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21597,"bootTime":1765952261,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 12:17:38.295878 1383836 start.go:143] virtualization: kvm guest
	I1217 12:17:38.298768 1383836 out.go:179] * [pause-137189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 12:17:38.300531 1383836 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 12:17:38.300544 1383836 notify.go:221] Checking for updates...
	I1217 12:17:38.302907 1383836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:17:38.304064 1383836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:17:38.305165 1383836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:38.306159 1383836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 12:17:38.307178 1383836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:17:38.308796 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:38.309537 1383836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:17:38.359896 1383836 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 12:17:38.360896 1383836 start.go:309] selected driver: kvm2
	I1217 12:17:38.360925 1383836 start.go:927] validating driver "kvm2" against &{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.361116 1383836 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:17:38.362349 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:17:38.362436 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:38.362517 1383836 start.go:353] cluster config:
	{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.362700 1383836 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:17:38.364261 1383836 out.go:179] * Starting "pause-137189" primary control-plane node in "pause-137189" cluster
	I1217 12:17:38.365471 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:17:38.365522 1383836 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 12:17:38.365535 1383836 cache.go:65] Caching tarball of preloaded images
	I1217 12:17:38.365658 1383836 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 12:17:38.365674 1383836 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 12:17:38.365841 1383836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/config.json ...
	I1217 12:17:38.366165 1383836 start.go:360] acquireMachinesLock for pause-137189: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 12:17:56.433822 1383836 start.go:364] duration metric: took 18.06760151s to acquireMachinesLock for "pause-137189"
	I1217 12:17:56.433878 1383836 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:17:56.433887 1383836 fix.go:54] fixHost starting: 
	I1217 12:17:56.436602 1383836 fix.go:112] recreateIfNeeded on pause-137189: state=Running err=<nil>
	W1217 12:17:56.436647 1383836 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 12:17:56.438325 1383836 out.go:252] * Updating the running kvm2 "pause-137189" VM ...
	I1217 12:17:56.438363 1383836 machine.go:94] provisionDockerMachine start ...
	I1217 12:17:56.441386 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.441852 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.441883 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.442163 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.442430 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.442447 1383836 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:17:56.550048 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.550079 1383836 buildroot.go:166] provisioning hostname "pause-137189"
	I1217 12:17:56.553625 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554109 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.554146 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554420 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.554672 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.554687 1383836 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-137189 && echo "pause-137189" | sudo tee /etc/hostname
	I1217 12:17:56.683365 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.686773 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.687311 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687522 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.687787 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.687814 1383836 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-137189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-137189/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-137189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:17:56.793486 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:56.793528 1383836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:17:56.793601 1383836 buildroot.go:174] setting up certificates
	I1217 12:17:56.793615 1383836 provision.go:84] configureAuth start
	I1217 12:17:56.797345 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.797871 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.797907 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.801679 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802181 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.802228 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802432 1383836 provision.go:143] copyHostCerts
	I1217 12:17:56.802519 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:17:56.802537 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:17:56.802624 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:17:56.802821 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:17:56.802838 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:17:56.802887 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:17:56.803155 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:17:56.803174 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:17:56.803217 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:17:56.803310 1383836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.pause-137189 san=[127.0.0.1 192.168.39.45 localhost minikube pause-137189]
	I1217 12:17:56.918738 1383836 provision.go:177] copyRemoteCerts
	I1217 12:17:56.918800 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:17:56.922240 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922715 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.922746 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922948 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:17:57.008478 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:17:57.045415 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 12:17:57.085019 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 12:17:57.126836 1383836 provision.go:87] duration metric: took 333.203489ms to configureAuth
	I1217 12:17:57.126872 1383836 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:17:57.127154 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:57.130473 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131084 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:57.131123 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131334 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:57.131639 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:57.131668 1383836 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:18:03.155344 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:18:03.155378 1383836 machine.go:97] duration metric: took 6.71700129s to provisionDockerMachine
	I1217 12:18:03.155393 1383836 start.go:293] postStartSetup for "pause-137189" (driver="kvm2")
	I1217 12:18:03.155403 1383836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:18:03.155614 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:18:03.159779 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.160325 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160541 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.248230 1383836 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:18:03.253798 1383836 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:18:03.253824 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:18:03.253894 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:18:03.253967 1383836 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:18:03.254079 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:18:03.268822 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:03.302774 1383836 start.go:296] duration metric: took 147.365771ms for postStartSetup
	I1217 12:18:03.302823 1383836 fix.go:56] duration metric: took 6.868936046s for fixHost
	I1217 12:18:03.306746 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307312 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.307349 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307620 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:18:03.307872 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:18:03.307886 1383836 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:18:03.416832 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973883.412776808
	
	I1217 12:18:03.416864 1383836 fix.go:216] guest clock: 1765973883.412776808
	I1217 12:18:03.416874 1383836 fix.go:229] Guest: 2025-12-17 12:18:03.412776808 +0000 UTC Remote: 2025-12-17 12:18:03.302829513 +0000 UTC m=+25.082055048 (delta=109.947295ms)
	I1217 12:18:03.416896 1383836 fix.go:200] guest clock delta is within tolerance: 109.947295ms
	I1217 12:18:03.416903 1383836 start.go:83] releasing machines lock for "pause-137189", held for 6.983049517s
	I1217 12:18:03.420764 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421324 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.421377 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421651 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:03.421709 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:03.421722 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:03.421754 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:03.421787 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:03.421831 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:03.421903 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:03.422007 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:03.424970 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425514 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.425547 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425743 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.530365 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:03.566487 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:03.605108 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:03.611717 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.624257 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:03.641103 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646757 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646831 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.654564 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:03.670747 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.688345 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:03.704482 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710124 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710213 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.717800 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:03.731082 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.749306 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:03.761700 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767349 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767419 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.774481 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:03.790903 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:18:03.796616 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
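The certificate plumbing above follows OpenSSL's subject-hash convention: each CA is copied into /usr/share/ca-certificates, hashed with openssl x509 -hash, and symlinked into /etc/ssl/certs as <hash>.0 so TLS clients can resolve it, after which whichever of update-ca-certificates or update-ca-trust the guest image ships refreshes the bundle. A minimal sketch of the same steps done by hand (paths illustrative, hash value taken from the log above):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")       # subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"       # .0 = first cert with this hash
    # Refresh whichever trust-store tool the guest provides:
    command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates
    command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract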
	I1217 12:18:03.804031 1383836 ssh_runner.go:195] Run: cat /version.json
	I1217 12:18:03.804215 1383836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:18:03.839911 1383836 ssh_runner.go:195] Run: systemctl --version
	I1217 12:18:03.848178 1383836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:18:03.999577 1383836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:18:04.009162 1383836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:18:04.009277 1383836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:18:04.022017 1383836 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
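The find command above is logged with its glob patterns already shell-expanded by ssh_runner; the intent is to rename any pre-existing bridge or podman CNI configs so a stale config cannot shadow the one minikube manages. A quoted equivalent, as a sketch:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'echo "disabling $1"; mv "$1" "$1.mk_disabled"' _ {} \;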
	I1217 12:18:04.022048 1383836 start.go:496] detecting cgroup driver to use...
	I1217 12:18:04.022156 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:18:04.048238 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:18:04.070644 1383836 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:18:04.070717 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:18:04.092799 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:18:04.109622 1383836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:18:04.307257 1383836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:18:04.492787 1383836 docker.go:234] disabling docker service ...
	I1217 12:18:04.492894 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:18:04.524961 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:18:04.543840 1383836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:18:04.726189 1383836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:18:04.894624 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
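Runtime selection here is subtractive: with --container-runtime=crio, any cri-docker and docker units are stopped, their sockets disabled, and the services masked so socket activation cannot bring them back. Roughly, as a sketch of the same sequence:

    # cri-dockerd first (socket, then service), then docker itself:
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"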
	I1217 12:18:04.910539 1383836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:18:04.934048 1383836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:18:04.934128 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.947224 1383836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:18:04.947324 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.960156 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.974307 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.992701 1383836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:18:05.012211 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.030236 1383836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.048278 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.066840 1383836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:18:05.082352 1383836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:18:05.103035 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:05.654582 1383836 ssh_runner.go:195] Run: sudo systemctl restart crio
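The crictl and cri-o edits above are all tee and sed calls against /etc/crictl.yaml and the /etc/crio/crio.conf.d/02-crio.conf drop-in: point crictl at the cri-o socket, pin the pause image, force the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged ports via default_sysctls, then reload units and restart cri-o. A sketch of what to expect in the two files afterwards (reconstructed from the commands above, not a dump of the actual guest):

    cat /etc/crictl.yaml
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",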
	I1217 12:18:06.073917 1383836 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:18:06.074017 1383836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 12:18:06.082385 1383836 start.go:564] Will wait 60s for crictl version
	I1217 12:18:06.082505 1383836 ssh_runner.go:195] Run: which crictl
	I1217 12:18:06.087517 1383836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:18:06.130064 1383836 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:18:06.130178 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.175515 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.429728 1383836 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 12:18:06.435189 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.435820 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:06.435856 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.436177 1383836 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 12:18:06.446076 1383836 kubeadm.go:884] updating cluster {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3
ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:18:06.446322 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:18:06.446414 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.573055 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.573084 1383836 crio.go:433] Images already preloaded, skipping extraction
	I1217 12:18:06.573147 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.693571 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.693599 1383836 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:18:06.693609 1383836 kubeadm.go:935] updating node { 192.168.39.45 8443 v1.34.3 crio true true} ...
	I1217 12:18:06.693749 1383836 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-137189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:18:06.693851 1383836 ssh_runner.go:195] Run: crio config
	I1217 12:18:06.770459 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:18:06.770543 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:18:06.770571 1383836 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:18:06.770601 1383836 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-137189 NodeName:pause-137189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:18:06.770804 1383836 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-137189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.45"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:18:06.770892 1383836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:18:06.795556 1383836 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:18:06.795661 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:18:06.824438 1383836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1217 12:18:06.872321 1383836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:18:06.918865 1383836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
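The generated config (the 2212 bytes written above) is staged as /var/tmp/minikube/kubeadm.yaml.new alongside the kubelet unit and its 10-kubeadm.conf drop-in. A hedged way to sanity-check such a file by hand; kubeadm config validate exists in recent kubeadm releases, so treat the subcommand as an assumption on older versions:

    # Validate the InitConfiguration/ClusterConfiguration/component-config documents
    # without starting anything:
    sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new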
	I1217 12:18:06.972804 1383836 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I1217 12:18:06.988931 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:07.345330 1383836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:18:07.376414 1383836 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189 for IP: 192.168.39.45
	I1217 12:18:07.376445 1383836 certs.go:195] generating shared ca certs ...
	I1217 12:18:07.376468 1383836 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.376687 1383836 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:18:07.376766 1383836 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:18:07.376780 1383836 certs.go:257] generating profile certs ...
	I1217 12:18:07.376898 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/client.key
	I1217 12:18:07.376994 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key.bd5945ce
	I1217 12:18:07.377059 1383836 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key
	I1217 12:18:07.377235 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:07.377290 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:07.377300 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:07.377343 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:07.377382 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:07.377410 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:07.377467 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:07.378515 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:18:07.493330 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:18:07.574427 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:18:07.652304 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:18:07.713042 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 12:18:07.748572 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:18:07.821136 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:18:07.869252 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:18:07.927195 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:08.036351 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:08.131745 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:08.245162 1383836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:18:08.294033 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:08.311655 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.345003 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:08.386518 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399088 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399191 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.416105 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:08.442880 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.476720 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:08.501370 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511800 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511899 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.524569 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:08.553109 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.572282 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:08.604798 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.616889 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.617016 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.630019 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:08.653536 1383836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:18:08.666006 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:18:08.679610 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:18:08.693271 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:18:08.704893 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:18:08.713855 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:18:08.724142 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
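Each of these openssl checks uses -checkend 86400, which exits non-zero if the certificate will expire within the next 86400 seconds (one day), so a failing check is how an imminent expiry would be detected before reusing the existing control-plane certs. The same check over the certs listed above, as a sketch:

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
             etcd/server etcd/peer etcd/healthcheck-client; do
      openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        && echo "${c}: valid for at least 24h" \
        || echo "${c}: expires within 24h"
    done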
	I1217 12:18:08.735084 1383836 kubeadm.go:401] StartCluster: {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 Cl
usterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:18:08.735292 1383836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:18:08.735358 1383836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:18:08.791645 1383836 cri.go:89] found id: "310d732afecf22f7a55f5b9312ad9e71118394ff09fc9f7d7c3eaf2de48cad02"
	I1217 12:18:08.791672 1383836 cri.go:89] found id: "d958a10e60bb18b7c6cfef7e922ec6c511df7903bff6d3fe4b2efb6fb756059c"
	I1217 12:18:08.791677 1383836 cri.go:89] found id: "1944d91c94e5183e69b38181a36718fe96c0be4386a877f00873165f1ee8b0b9"
	I1217 12:18:08.791699 1383836 cri.go:89] found id: "0b055307c937cef89a52e812a0b2a6ef7b83b6907d8c9cd10303092d207d0795"
	I1217 12:18:08.791703 1383836 cri.go:89] found id: "d3b342c3641fa821eadfb0cc69320076516baa945a7859a71b098f85087a5809"
	I1217 12:18:08.791709 1383836 cri.go:89] found id: "e1ade8faaa4b5b905c5a7436d0db742ad1837dde6e3fb0d4c61c936242632f16"
	I1217 12:18:08.791714 1383836 cri.go:89] found id: "0efd0e07325d21b417fc524dc11c66a45c3ed8db4fe88ebeed1de2dad9969f68"
	I1217 12:18:08.791718 1383836 cri.go:89] found id: "efc4e6ac4add4a3d2e1c7ae474271d1f76d922e4d443a1d8880e722d4469f383"
	I1217 12:18:08.791722 1383836 cri.go:89] found id: "166a9985e700638b97cb2541dc51b9d8a9c04973af2c6bedc9713270addf8697"
	I1217 12:18:08.791739 1383836 cri.go:89] found id: "119b3f1b9c1651145ae076affb70e219939b71e58a4f9e72b0af00646d803e4d"
	I1217 12:18:08.791752 1383836 cri.go:89] found id: "686717c825f6ddedcf110c0e997874c12e953f5c4803eccb336ff9aa50b1b3e1"
	I1217 12:18:08.791757 1383836 cri.go:89] found id: ""
	I1217 12:18:08.791821 1383836 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
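The truncated log above ends while enumerating kube-system containers: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system returns the container IDs listed, and sudo runc list -f json is then run to see which of them the low-level runtime still tracks. Poking at the same state by hand on the guest would look roughly like this (the container ID is the first one from the list above):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect 310d732afecf22f7a55f5b9312ad9e71118394ff09fc9f7d7c3eaf2de48cad02 | head -n 20
    sudo runc list -f json | head -c 400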
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-137189 -n pause-137189
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-137189 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-137189 logs -n 25: (1.429240109s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-470455 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo containerd config dump                                                                                                                                                                                                │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo crio config                                                                                                                                                                                                           │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ delete  │ -p cilium-470455                                                                                                                                                                                                                            │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │ 17 Dec 25 12:15 UTC │
	│ start   │ -p pause-137189 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-137189           │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p running-upgrade-616756 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                      │ running-upgrade-616756 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-630475 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ stopped-upgrade-630475 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │                     │
	│ delete  │ -p stopped-upgrade-630475                                                                                                                                                                                                                   │ stopped-upgrade-630475 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:16 UTC │
	│ start   │ -p guest-887598 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-887598           │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p cert-expiration-026544 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-026544 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-757245 │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │                     │
	│ delete  │ -p cert-expiration-026544                                                                                                                                                                                                                   │ cert-expiration-026544 │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-837348      │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │                     │
	│ start   │ -p pause-137189 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-137189           │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │ 17 Dec 25 12:18 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:17:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:17:38.291714 1383836 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:17:38.291860 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.291874 1383836 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:38.291880 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.292244 1383836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:17:38.292821 1383836 out.go:368] Setting JSON to false
	I1217 12:17:38.295783 1383836 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21597,"bootTime":1765952261,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 12:17:38.295878 1383836 start.go:143] virtualization: kvm guest
	I1217 12:17:38.298768 1383836 out.go:179] * [pause-137189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 12:17:38.300531 1383836 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 12:17:38.300544 1383836 notify.go:221] Checking for updates...
	I1217 12:17:38.302907 1383836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:17:38.304064 1383836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:17:38.305165 1383836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:38.306159 1383836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 12:17:38.307178 1383836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:17:38.308796 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:38.309537 1383836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:17:38.359896 1383836 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 12:17:38.360896 1383836 start.go:309] selected driver: kvm2
	I1217 12:17:38.360925 1383836 start.go:927] validating driver "kvm2" against &{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.361116 1383836 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:17:38.362349 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:17:38.362436 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:38.362517 1383836 start.go:353] cluster config:
	{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.362700 1383836 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:17:38.364261 1383836 out.go:179] * Starting "pause-137189" primary control-plane node in "pause-137189" cluster
	I1217 12:17:35.757546 1383625 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 12:17:35.757825 1383625 start.go:159] libmachine.API.Create for "no-preload-837348" (driver="kvm2")
	I1217 12:17:35.757864 1383625 client.go:173] LocalClient.Create starting
	I1217 12:17:35.757928 1383625 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem
	I1217 12:17:35.757966 1383625 main.go:143] libmachine: Decoding PEM data...
	I1217 12:17:35.758013 1383625 main.go:143] libmachine: Parsing certificate...
	I1217 12:17:35.758081 1383625 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem
	I1217 12:17:35.758108 1383625 main.go:143] libmachine: Decoding PEM data...
	I1217 12:17:35.758125 1383625 main.go:143] libmachine: Parsing certificate...
	I1217 12:17:35.758551 1383625 main.go:143] libmachine: creating domain...
	I1217 12:17:35.758560 1383625 main.go:143] libmachine: creating network...
	I1217 12:17:35.760052 1383625 main.go:143] libmachine: found existing default network
	I1217 12:17:35.760388 1383625 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 12:17:35.761487 1383625 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:af:da} reservation:<nil>}
	I1217 12:17:35.762478 1383625 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c0dc30}
	I1217 12:17:35.762581 1383625 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-837348</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 12:17:35.769459 1383625 main.go:143] libmachine: creating private network mk-no-preload-837348 192.168.50.0/24...
	I1217 12:17:35.859849 1383625 main.go:143] libmachine: private network mk-no-preload-837348 192.168.50.0/24 created
	I1217 12:17:35.860234 1383625 main.go:143] libmachine: <network>
	  <name>mk-no-preload-837348</name>
	  <uuid>40cee4c2-980a-47c7-9a34-797d661c24bf</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:55:e6:ca'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
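Network selection above walks the private 192.168.x.0/24 ranges, skips 192.168.39.0/24 because the pause-137189 network already holds it, and settles on 192.168.50.0/24 with DHCP handing out .2-.253 and DNS disabled. Assuming virsh is available on the host and the qemu:///system URI from KVMQemuURI, the created network can be inspected with:

    virsh --connect qemu:///system net-list --all
    virsh --connect qemu:///system net-dumpxml mk-no-preload-837348
    virsh --connect qemu:///system net-dhcp-leases mk-no-preload-837348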
	
	I1217 12:17:35.860271 1383625 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 ...
	I1217 12:17:35.860293 1383625 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 12:17:35.860305 1383625 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:35.860374 1383625 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 12:17:36.191158 1383625 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa...
	I1217 12:17:36.262155 1383625 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk...
	I1217 12:17:36.262207 1383625 main.go:143] libmachine: Writing magic tar header
	I1217 12:17:36.262234 1383625 main.go:143] libmachine: Writing SSH key tar header
	I1217 12:17:36.262318 1383625 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 ...
	I1217 12:17:36.262379 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348
	I1217 12:17:36.262413 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 (perms=drwx------)
	I1217 12:17:36.262433 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines
	I1217 12:17:36.262446 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines (perms=drwxr-xr-x)
	I1217 12:17:36.262457 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:36.262469 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube (perms=drwxr-xr-x)
	I1217 12:17:36.262481 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916
	I1217 12:17:36.262494 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916 (perms=drwxrwxr-x)
	I1217 12:17:36.262510 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 12:17:36.262523 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 12:17:36.262539 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 12:17:36.262553 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 12:17:36.262562 1383625 main.go:143] libmachine: checking permissions on dir: /home
	I1217 12:17:36.262571 1383625 main.go:143] libmachine: skipping /home - not owner
	I1217 12:17:36.262576 1383625 main.go:143] libmachine: defining domain...
	I1217 12:17:36.263951 1383625 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-837348</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-837348'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
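The domain XML above is what libmachine hands to libvirt: 3072 MiB of RAM, 2 vCPUs with host-passthrough, the boot2docker ISO on a SCSI cdrom, the raw disk on virtio, and one NIC on the private mk-no-preload-837348 network plus one on the default NAT network. Defining and booting an equivalent guest by hand would look roughly like this (the XML path is hypothetical):

    virsh --connect qemu:///system define /tmp/no-preload-837348.xml
    virsh --connect qemu:///system start no-preload-837348
    virsh --connect qemu:///system dumpxml no-preload-837348   # expanded XML, as in the log below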
	
	I1217 12:17:36.269022 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:bd:30:b6 in network default
	I1217 12:17:36.269812 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:36.269834 1383625 main.go:143] libmachine: starting domain...
	I1217 12:17:36.269839 1383625 main.go:143] libmachine: ensuring networks are active...
	I1217 12:17:36.270730 1383625 main.go:143] libmachine: Ensuring network default is active
	I1217 12:17:36.271183 1383625 main.go:143] libmachine: Ensuring network mk-no-preload-837348 is active
	I1217 12:17:36.271861 1383625 main.go:143] libmachine: getting domain XML...
	I1217 12:17:36.273322 1383625 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-837348</name>
	  <uuid>deac9a7a-ba38-47f4-bf03-931dc4f036c8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3f:19:62'/>
	      <source network='mk-no-preload-837348'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:30:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 12:17:37.676295 1383625 main.go:143] libmachine: waiting for domain to start...
	I1217 12:17:37.677742 1383625 main.go:143] libmachine: domain is now running
	I1217 12:17:37.677758 1383625 main.go:143] libmachine: waiting for IP...
	I1217 12:17:37.678535 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:37.679152 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:37.679166 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:37.679623 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:37.679660 1383625 retry.go:31] will retry after 282.084865ms: waiting for domain to come up
	I1217 12:17:37.963376 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:37.964438 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:37.964461 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:37.964948 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:37.965001 1383625 retry.go:31] will retry after 316.960465ms: waiting for domain to come up
	I1217 12:17:38.283838 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:38.284796 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:38.284841 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:38.285384 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:38.285445 1383625 retry.go:31] will retry after 315.128777ms: waiting for domain to come up
	I1217 12:17:38.602264 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:38.603247 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:38.603271 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:38.603814 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:38.603865 1383625 retry.go:31] will retry after 398.048219ms: waiting for domain to come up
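	The IP wait loop above asks libvirt for the guest address first from the DHCP lease table (source=lease) and then falls back to ARP, backing off between attempts until an interface appears. The same lookup can be reproduced manually; a rough sketch, assuming a libvirt recent enough for domifaddr to accept --source:
	    # Leases handed out on the machine-private network
	    virsh -c qemu:///system net-dhcp-leases mk-no-preload-837348
	    # Addresses as seen from the lease table, then from the host ARP cache
	    virsh -c qemu:///system domifaddr no-preload-837348 --source lease
	    virsh -c qemu:///system domifaddr no-preload-837348 --source arp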
	I1217 12:17:35.668306 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:35.668347 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
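	Interleaved with the new machine, another profile (process 1382780) is still polling its apiserver health endpoint and timing out. The same probe can be issued from the host with curl; a sketch only, and the endpoint may answer 401 instead of ok if anonymous access is disabled:
	    # -k because the apiserver presents the cluster-internal CA; a plain 'ok' body means healthy
	    curl -k --max-time 5 https://192.168.61.103:8443/healthz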
	I1217 12:17:37.890637 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:17:37.891237 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:17:37.891285 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:17:37.891530 1383348 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1217 12:17:37.896342 1383348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:17:37.912942 1383348 kubeadm.go:884] updating cluster {Name:old-k8s-version-757245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:17:37.913097 1383348 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 12:17:37.913168 1383348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:37.947327 1383348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1217 12:17:37.947418 1383348 ssh_runner.go:195] Run: which lz4
	I1217 12:17:37.952849 1383348 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 12:17:37.958377 1383348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 12:17:37.958416 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1217 12:17:39.697301 1383348 crio.go:462] duration metric: took 1.744496709s to copy over tarball
	I1217 12:17:39.697398 1383348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 12:17:38.365471 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:17:38.365522 1383836 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 12:17:38.365535 1383836 cache.go:65] Caching tarball of preloaded images
	I1217 12:17:38.365658 1383836 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 12:17:38.365674 1383836 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 12:17:38.365841 1383836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/config.json ...
	I1217 12:17:38.366165 1383836 start.go:360] acquireMachinesLock for pause-137189: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 12:17:41.616441 1383348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.918997848s)
	I1217 12:17:41.616497 1383348 crio.go:469] duration metric: took 1.919161859s to extract the tarball
	I1217 12:17:41.616510 1383348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 12:17:41.665251 1383348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:41.709178 1383348 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:17:41.709203 1383348 cache_images.go:86] Images are preloaded, skipping loading
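	After the tarball is unpacked under /var, crictl is run again and now reports every image as preloaded, so the usual image load is skipped. The equivalent manual check inside the guest is a one-liner; a sketch assuming crictl is on the node's PATH:
	    # Should now include registry.k8s.io/kube-apiserver:v1.28.0, which was missing before the extract
	    sudo crictl images | grep kube-apiserver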
	I1217 12:17:41.709212 1383348 kubeadm.go:935] updating node { 192.168.83.245 8443 v1.28.0 crio true true} ...
	I1217 12:17:41.709306 1383348 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-757245 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:17:41.709376 1383348 ssh_runner.go:195] Run: crio config
	I1217 12:17:41.758504 1383348 cni.go:84] Creating CNI manager for ""
	I1217 12:17:41.758529 1383348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:41.758551 1383348 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:17:41.758572 1383348 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-757245 NodeName:old-k8s-version-757245 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:17:41.758717 1383348 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-757245"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:17:41.758784 1383348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 12:17:41.772626 1383348 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:17:41.772703 1383348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:17:41.784677 1383348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1217 12:17:41.808834 1383348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:17:41.829328 1383348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1217 12:17:41.850704 1383348 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I1217 12:17:41.855300 1383348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:17:41.870977 1383348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:17:42.013537 1383348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:17:42.034456 1383348 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245 for IP: 192.168.83.245
	I1217 12:17:42.034488 1383348 certs.go:195] generating shared ca certs ...
	I1217 12:17:42.034512 1383348 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.034724 1383348 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:17:42.034862 1383348 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:17:42.034883 1383348 certs.go:257] generating profile certs ...
	I1217 12:17:42.034967 1383348 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key
	I1217 12:17:42.035000 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt with IP's: []
	I1217 12:17:42.248648 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt ...
	I1217 12:17:42.248685 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: {Name:mkd4f6188837982d0a0dc17d03070915a2e288df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.248897 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key ...
	I1217 12:17:42.248917 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key: {Name:mk437fbf23952f2cba414b4b2fe12f437c02d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.249044 1383348 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8
	I1217 12:17:42.249067 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.245]
	I1217 12:17:42.326760 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 ...
	I1217 12:17:42.326793 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8: {Name:mkee1f449a6ebd5a3b2ca2b0ba6d404a247a5806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.327019 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8 ...
	I1217 12:17:42.327040 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8: {Name:mk39ae6a44c5222412802219a2fbebaf741d5553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.327164 1383348 certs.go:382] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt
	I1217 12:17:42.327272 1383348 certs.go:386] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key
	I1217 12:17:42.327358 1383348 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key
	I1217 12:17:42.327382 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt with IP's: []
	I1217 12:17:42.568463 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt ...
	I1217 12:17:42.568498 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt: {Name:mkcae9c3f6f633131d4dfe9c099eb0ae0021cbe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.568695 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key ...
	I1217 12:17:42.568713 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key: {Name:mk23db6e8c67ed2e7d3383233aae3724d08bc9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
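	The apiserver profile certificate above is requested for the IPs listed in the log (10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.245). Whether the written certificate really carries those SANs can be confirmed with openssl; a minimal sketch against the profile path used in this run:
	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt \
	        | grep -A1 'Subject Alternative Name'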
	I1217 12:17:42.568914 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:17:42.568977 1383348 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:17:42.569008 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:17:42.569052 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:17:42.569092 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:17:42.569128 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:17:42.569197 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:42.569799 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:17:42.599975 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:17:42.628365 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:17:42.656627 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:17:42.685196 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 12:17:42.717050 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 12:17:42.752543 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:17:42.796821 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:17:42.838073 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:17:42.870896 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:17:42.903925 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:17:42.937970 1383348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:17:42.963537 1383348 ssh_runner.go:195] Run: openssl version
	I1217 12:17:42.970280 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:17:42.982797 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:17:42.995525 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.001066 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.001139 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.008969 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:17:43.020905 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.033842 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:17:43.046660 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.051905 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.051988 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.059514 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:43.072811 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.084493 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:17:43.096012 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.101487 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.101564 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.109092 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
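	The test/ln/hash sequence above installs each CA under /etc/ssl/certs both by name and as a <subject-hash>.0 symlink, which is the form OpenSSL's certificate lookup actually resolves. The hash in the symlink name is exactly what openssl prints; a sketch of the same check done by hand for the minikube CA:
	    # Compute the subject hash and confirm the matching symlink exists (b5213941.0 in this run)
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${hash}.0"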
	I1217 12:17:43.122596 1383348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:17:43.128369 1383348 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 12:17:43.128442 1383348 kubeadm.go:401] StartCluster: {Name:old-k8s-version-757245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:43.128540 1383348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:17:43.128626 1383348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:17:43.171963 1383348 cri.go:89] found id: ""
	I1217 12:17:43.172051 1383348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:17:43.188409 1383348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
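	With the rendered config promoted to /var/tmp/minikube/kubeadm.yaml, a bootstrap failure can be investigated without mutating node state by letting kubeadm evaluate the same file in dry-run mode; an illustrative sketch using the pinned binary path from this log, not the exact command minikube runs:
	    # Walk the init phases against the generated config without persisting anything
	    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run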
	I1217 12:17:43.201853 1383348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 12:17:43.214183 1383348 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 12:17:43.214215 1383348 kubeadm.go:158] found existing configuration files:
	
	I1217 12:17:43.214278 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 12:17:43.228732 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 12:17:43.228802 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 12:17:43.244518 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 12:17:43.259301 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 12:17:43.259367 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 12:17:43.271757 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 12:17:43.284348 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 12:17:43.284425 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 12:17:43.297181 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 12:17:43.308716 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 12:17:43.308803 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 12:17:43.321409 1383348 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 12:17:43.386204 1383348 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 12:17:43.386313 1383348 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 12:17:43.520121 1383348 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 12:17:43.520282 1383348 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 12:17:43.520411 1383348 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 12:17:43.714647 1383348 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 12:17:39.003650 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:39.004580 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:39.004604 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:39.005155 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:39.005205 1383625 retry.go:31] will retry after 748.235257ms: waiting for domain to come up
	I1217 12:17:39.755421 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:39.756285 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:39.756312 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:39.756774 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:39.756821 1383625 retry.go:31] will retry after 860.765677ms: waiting for domain to come up
	I1217 12:17:40.619481 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:40.622585 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:40.622612 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:40.623106 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:40.623161 1383625 retry.go:31] will retry after 1.141529292s: waiting for domain to come up
	I1217 12:17:41.766036 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:41.766884 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:41.766905 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:41.767454 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:41.767498 1383625 retry.go:31] will retry after 1.422452711s: waiting for domain to come up
	I1217 12:17:43.192374 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:43.193156 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:43.193175 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:43.193586 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:43.193633 1383625 retry.go:31] will retry after 1.200790035s: waiting for domain to come up
	I1217 12:17:40.669170 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:40.669227 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:17:43.738482 1383348 out.go:252]   - Generating certificates and keys ...
	I1217 12:17:43.738655 1383348 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 12:17:43.738795 1383348 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 12:17:43.834748 1383348 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 12:17:44.137451 1383348 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 12:17:44.269800 1383348 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 12:17:44.562263 1383348 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 12:17:44.764289 1383348 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 12:17:44.764517 1383348 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-757245] and IPs [192.168.83.245 127.0.0.1 ::1]
	I1217 12:17:44.887414 1383348 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 12:17:44.887640 1383348 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-757245] and IPs [192.168.83.245 127.0.0.1 ::1]
	I1217 12:17:45.088229 1383348 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 12:17:45.280777 1383348 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 12:17:45.332096 1383348 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 12:17:45.333532 1383348 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 12:17:45.589279 1383348 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 12:17:45.844463 1383348 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 12:17:46.135827 1383348 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 12:17:46.372629 1383348 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 12:17:46.373466 1383348 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 12:17:46.375898 1383348 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 12:17:44.396269 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:44.396921 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:44.396937 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:44.397333 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:44.397373 1383625 retry.go:31] will retry after 1.789377224s: waiting for domain to come up
	I1217 12:17:46.189513 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:46.190539 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:46.190563 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:46.191093 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:46.191143 1383625 retry.go:31] will retry after 2.694089109s: waiting for domain to come up
	I1217 12:17:45.669529 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:45.669655 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:17:45.669761 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:17:45.728593 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:45.728624 1382780 cri.go:89] found id: "f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9"
	I1217 12:17:45.728635 1382780 cri.go:89] found id: ""
	I1217 12:17:45.728660 1382780 logs.go:282] 2 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9]
	I1217 12:17:45.728736 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.734098 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.738648 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:17:45.738764 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:17:45.785344 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:45.785372 1382780 cri.go:89] found id: ""
	I1217 12:17:45.785383 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:17:45.785462 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.789841 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:17:45.789925 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:17:45.836793 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:45.836822 1382780 cri.go:89] found id: ""
	I1217 12:17:45.836833 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:17:45.836923 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.842997 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:17:45.843088 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:17:45.893333 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:45.893368 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:45.893374 1382780 cri.go:89] found id: ""
	I1217 12:17:45.893385 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:17:45.893468 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.898140 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.903766 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:17:45.903864 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:17:45.945304 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:45.945339 1382780 cri.go:89] found id: ""
	I1217 12:17:45.945354 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:17:45.945435 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.950875 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:17:45.950959 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:17:45.985484 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:45.985512 1382780 cri.go:89] found id: ""
	I1217 12:17:45.985522 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:17:45.985589 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.990036 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:17:45.990146 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:17:46.033514 1382780 cri.go:89] found id: ""
	I1217 12:17:46.033550 1382780 logs.go:282] 0 containers: []
	W1217 12:17:46.033563 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:17:46.033572 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:17:46.033646 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:17:46.072220 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:46.072256 1382780 cri.go:89] found id: ""
	I1217 12:17:46.072273 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:17:46.072348 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:46.077833 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:17:46.077865 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 12:17:46.109790 1382780 logs.go:138] Found kubelet problem: Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: W1217 12:16:16.318011    1243 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-616756" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-616756' and this object
	W1217 12:17:46.110120 1382780 logs.go:138] Found kubelet problem: Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: E1217 12:16:16.318117    1243 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-616756\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-616756' and this object" logger="UnhandledError"
	I1217 12:17:46.189112 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:17:46.189158 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:17:46.281432 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:17:46.281461 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:17:46.281481 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:46.323528 1382780 logs.go:123] Gathering logs for kube-apiserver [f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9] ...
	I1217 12:17:46.323571 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9"
	I1217 12:17:46.372689 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:17:46.372723 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:46.420134 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:17:46.420179 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:46.455695 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:17:46.455745 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:46.548793 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:17:46.548836 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:46.602964 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:17:46.603027 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:17:46.630074 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:17:46.630131 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:46.673440 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:17:46.673471 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:46.719293 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:17:46.719341 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:46.769022 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:17:46.769063 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:17:47.191306 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:17:47.191349 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:17:47.248568 1382780 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:47.248608 1382780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1217 12:17:47.248676 1382780 out.go:285] X Problems detected in kubelet:
	W1217 12:17:47.248700 1382780 out.go:285]   Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: W1217 12:16:16.318011    1243 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-616756" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-616756' and this object
	W1217 12:17:47.248713 1382780 out.go:285]   Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: E1217 12:16:16.318117    1243 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-616756\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-616756' and this object" logger="UnhandledError"
	I1217 12:17:47.248723 1382780 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:47.248730 1382780 out.go:408] TERM=,COLORTERM=, which probably does not support color
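	The post-mortem gathering above is a loop over journalctl and crictl for every control-plane container it can find. When a run has to be triaged by hand on the leftover VM, the same data comes from a handful of commands; container IDs differ per run, the one below is copied from this log as an example:
	    sudo journalctl -u kubelet -n 400   # kubelet side, e.g. the kube-root-ca.crt RBAC errors flagged above
	    sudo journalctl -u crio -n 400      # container runtime side
	    sudo crictl ps -a                   # enumerate container IDs
	    sudo crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f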
	I1217 12:17:46.379516 1383348 out.go:252]   - Booting up control plane ...
	I1217 12:17:46.379640 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 12:17:46.379756 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 12:17:46.380663 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 12:17:46.405815 1383348 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 12:17:46.407103 1383348 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 12:17:46.407216 1383348 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 12:17:46.638633 1383348 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 12:17:48.887414 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:48.888272 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:48.888296 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:48.888733 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:48.888778 1383625 retry.go:31] will retry after 2.517738762s: waiting for domain to come up
	I1217 12:17:51.409568 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:51.410317 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:51.410343 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:51.410728 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:51.410776 1383625 retry.go:31] will retry after 3.467213061s: waiting for domain to come up
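
The repeated "will retry after ..." entries show the harness polling the hypervisor for the domain's address (first from the DHCP lease table, then from ARP) with a growing, randomized backoff. A minimal Go sketch of that polling pattern, with a hypothetical lookupIP helper standing in for the libvirt query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for asking the hypervisor (lease table or ARP
    // cache) for the domain's current address. Hypothetical helper.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("no interface addresses found")
    }

    // waitForIP polls until the domain reports an address or the deadline
    // passes, sleeping a randomized, growing interval between attempts.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 2 * time.Second
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(time.Second)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
            time.Sleep(wait)
            backoff += time.Second // grow the base interval each round
        }
        return "", fmt.Errorf("timed out waiting for %s to obtain an IP", domain)
    }

    func main() {
        if ip, err := waitForIP("no-preload-837348", 10*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("domain IP:", ip)
        }
    }
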
	I1217 12:17:54.136069 1383348 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.503805 seconds
	I1217 12:17:54.136296 1383348 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 12:17:54.152520 1383348 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 12:17:54.682315 1383348 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 12:17:54.682639 1383348 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-757245 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 12:17:55.198590 1383348 kubeadm.go:319] [bootstrap-token] Using token: 8niwn2.dshdvdj7hppgjh3a
	I1217 12:17:55.199939 1383348 out.go:252]   - Configuring RBAC rules ...
	I1217 12:17:55.200090 1383348 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 12:17:55.207224 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 12:17:55.218315 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 12:17:55.222532 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 12:17:55.229760 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 12:17:55.233703 1383348 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 12:17:55.250630 1383348 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 12:17:55.542693 1383348 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 12:17:55.619965 1383348 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 12:17:55.624871 1383348 kubeadm.go:319] 
	I1217 12:17:55.624974 1383348 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 12:17:55.625029 1383348 kubeadm.go:319] 
	I1217 12:17:55.625138 1383348 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 12:17:55.625155 1383348 kubeadm.go:319] 
	I1217 12:17:55.625196 1383348 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 12:17:55.625310 1383348 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 12:17:55.625407 1383348 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 12:17:55.625422 1383348 kubeadm.go:319] 
	I1217 12:17:55.625507 1383348 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 12:17:55.625518 1383348 kubeadm.go:319] 
	I1217 12:17:55.625623 1383348 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 12:17:55.625639 1383348 kubeadm.go:319] 
	I1217 12:17:55.625714 1383348 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 12:17:55.625821 1383348 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 12:17:55.625928 1383348 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 12:17:55.625939 1383348 kubeadm.go:319] 
	I1217 12:17:55.626818 1383348 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 12:17:55.626946 1383348 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 12:17:55.626963 1383348 kubeadm.go:319] 
	I1217 12:17:55.627080 1383348 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8niwn2.dshdvdj7hppgjh3a \
	I1217 12:17:55.627229 1383348 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 \
	I1217 12:17:55.627258 1383348 kubeadm.go:319] 	--control-plane 
	I1217 12:17:55.627268 1383348 kubeadm.go:319] 
	I1217 12:17:55.627382 1383348 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 12:17:55.627395 1383348 kubeadm.go:319] 
	I1217 12:17:55.627503 1383348 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8niwn2.dshdvdj7hppgjh3a \
	I1217 12:17:55.627632 1383348 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 
	I1217 12:17:55.628693 1383348 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
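
The --discovery-token-ca-cert-hash value in the join commands above is, per kubeadm's documented format, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that reproduces such a value from a CA certificate; the /etc/kubernetes/pki/ca.crt path is kubeadm's default and is assumed here:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Default location of the cluster CA written by kubeadm (assumed path).
        raw, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The join hash is the SHA-256 of the CA cert's DER-encoded
        // Subject Public Key Info, printed as "sha256:<hex>".
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
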
	I1217 12:17:55.628736 1383348 cni.go:84] Creating CNI manager for ""
	I1217 12:17:55.628754 1383348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:55.631268 1383348 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 12:17:55.632385 1383348 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 12:17:55.652741 1383348 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
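
The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is the bridge CNI configuration; its exact contents are not shown in the log. A rough sketch of a bridge plus host-local IPAM conflist of that general shape is below; every field value is illustrative, not the file minikube actually writes:

    package main

    import "os"

    // An illustrative bridge CNI conflist. Values are examples only, not the
    // exact 496-byte file generated above.
    const bridgeConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
        // Written to the CNI config dir on the guest; path taken from the log above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
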
	I1217 12:17:56.433822 1383836 start.go:364] duration metric: took 18.06760151s to acquireMachinesLock for "pause-137189"
	I1217 12:17:56.433878 1383836 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:17:56.433887 1383836 fix.go:54] fixHost starting: 
	I1217 12:17:56.436602 1383836 fix.go:112] recreateIfNeeded on pause-137189: state=Running err=<nil>
	W1217 12:17:56.436647 1383836 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 12:17:56.438325 1383836 out.go:252] * Updating the running kvm2 "pause-137189" VM ...
	I1217 12:17:56.438363 1383836 machine.go:94] provisionDockerMachine start ...
	I1217 12:17:56.441386 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.441852 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.441883 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.442163 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.442430 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.442447 1383836 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:17:56.550048 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.550079 1383836 buildroot.go:166] provisioning hostname "pause-137189"
	I1217 12:17:56.553625 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554109 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.554146 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554420 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.554672 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.554687 1383836 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-137189 && echo "pause-137189" | sudo tee /etc/hostname
	I1217 12:17:56.683365 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.686773 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.687311 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687522 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.687787 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.687814 1383836 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-137189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-137189/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-137189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:17:56.793486 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:56.793528 1383836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:17:56.793601 1383836 buildroot.go:174] setting up certificates
	I1217 12:17:56.793615 1383836 provision.go:84] configureAuth start
	I1217 12:17:56.797345 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.797871 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.797907 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.801679 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802181 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.802228 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802432 1383836 provision.go:143] copyHostCerts
	I1217 12:17:56.802519 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:17:56.802537 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:17:56.802624 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:17:56.802821 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:17:56.802838 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:17:56.802887 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:17:56.803155 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:17:56.803174 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:17:56.803217 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:17:56.803310 1383836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.pause-137189 san=[127.0.0.1 192.168.39.45 localhost minikube pause-137189]
	I1217 12:17:56.918738 1383836 provision.go:177] copyRemoteCerts
	I1217 12:17:56.918800 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:17:56.922240 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922715 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.922746 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922948 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:17:57.008478 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:17:57.045415 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 12:17:57.085019 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 12:17:57.126836 1383836 provision.go:87] duration metric: took 333.203489ms to configureAuth
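
configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the host name (see the san=[...] line above). A compact sketch of issuing such a certificate from an existing CA with Go's crypto/x509; the function name and the one-year lifetime are illustrative:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // NewServerCert signs a server certificate for the given SANs with caCert/caKey
    // and returns PEM-encoded certificate and key.
    func NewServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour), // lifetime is illustrative
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. localhost, minikube, pause-137189
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.39.45
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
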
	I1217 12:17:57.126872 1383836 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:17:57.127154 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:57.130473 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131084 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:57.131123 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131334 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:57.131639 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:57.131668 1383836 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:17:54.879253 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:54.879955 1383625 main.go:143] libmachine: domain no-preload-837348 has current primary IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:54.879976 1383625 main.go:143] libmachine: found domain IP: 192.168.50.4
	I1217 12:17:54.880000 1383625 main.go:143] libmachine: reserving static IP address...
	I1217 12:17:54.880455 1383625 main.go:143] libmachine: unable to find host DHCP lease matching {name: "no-preload-837348", mac: "52:54:00:3f:19:62", ip: "192.168.50.4"} in network mk-no-preload-837348
	I1217 12:17:55.113103 1383625 main.go:143] libmachine: reserved static IP address 192.168.50.4 for domain no-preload-837348
	I1217 12:17:55.113135 1383625 main.go:143] libmachine: waiting for SSH...
	I1217 12:17:55.113144 1383625 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 12:17:55.117017 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.117590 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.117625 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.117828 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.118084 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.118096 1383625 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 12:17:55.225035 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:55.225489 1383625 main.go:143] libmachine: domain creation complete
	I1217 12:17:55.227467 1383625 machine.go:94] provisionDockerMachine start ...
	I1217 12:17:55.230655 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.231189 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.231228 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.231448 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.231647 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.231657 1383625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:17:55.344072 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 12:17:55.344104 1383625 buildroot.go:166] provisioning hostname "no-preload-837348"
	I1217 12:17:55.347605 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.348186 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.348236 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.348609 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.348912 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.348930 1383625 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-837348 && echo "no-preload-837348" | sudo tee /etc/hostname
	I1217 12:17:55.479766 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837348
	
	I1217 12:17:55.483549 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.484128 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.484179 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.484449 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.484709 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.484728 1383625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-837348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-837348/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-837348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:17:55.603469 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:55.603504 1383625 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:17:55.603527 1383625 buildroot.go:174] setting up certificates
	I1217 12:17:55.603539 1383625 provision.go:84] configureAuth start
	I1217 12:17:55.606816 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.607497 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.607538 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.610540 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.611022 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.611054 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.611246 1383625 provision.go:143] copyHostCerts
	I1217 12:17:55.611334 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:17:55.611349 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:17:55.611430 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:17:55.611581 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:17:55.611592 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:17:55.611625 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:17:55.611690 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:17:55.611697 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:17:55.611721 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:17:55.611769 1383625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.no-preload-837348 san=[127.0.0.1 192.168.50.4 localhost minikube no-preload-837348]
	I1217 12:17:55.699239 1383625 provision.go:177] copyRemoteCerts
	I1217 12:17:55.699302 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:17:55.702201 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.702647 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.702684 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.702854 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:55.790502 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:17:55.831924 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:17:55.874154 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:17:55.905937 1383625 provision.go:87] duration metric: took 302.383127ms to configureAuth
	I1217 12:17:55.905977 1383625 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:17:55.906246 1383625 config.go:182] Loaded profile config "no-preload-837348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 12:17:55.909788 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.910342 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.910395 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.910622 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.910886 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.910910 1383625 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:17:56.158558 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:17:56.158589 1383625 machine.go:97] duration metric: took 931.098693ms to provisionDockerMachine
	I1217 12:17:56.158604 1383625 client.go:176] duration metric: took 20.400731835s to LocalClient.Create
	I1217 12:17:56.158653 1383625 start.go:167] duration metric: took 20.400806334s to libmachine.API.Create "no-preload-837348"
	I1217 12:17:56.158668 1383625 start.go:293] postStartSetup for "no-preload-837348" (driver="kvm2")
	I1217 12:17:56.158686 1383625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:17:56.158765 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:17:56.161556 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.162017 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.162057 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.162247 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:56.245367 1383625 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:17:56.250646 1383625 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:17:56.250677 1383625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:17:56.250763 1383625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:17:56.250850 1383625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:17:56.250969 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:17:56.262524 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:56.303233 1383625 start.go:296] duration metric: took 144.54751ms for postStartSetup
	I1217 12:17:56.306769 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.307306 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.307334 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.307632 1383625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/config.json ...
	I1217 12:17:56.307839 1383625 start.go:128] duration metric: took 20.552189886s to createHost
	I1217 12:17:56.310148 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.310492 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.310534 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.310749 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.311069 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:56.311091 1383625 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:17:56.433653 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973876.396441235
	
	I1217 12:17:56.433684 1383625 fix.go:216] guest clock: 1765973876.396441235
	I1217 12:17:56.433693 1383625 fix.go:229] Guest: 2025-12-17 12:17:56.396441235 +0000 UTC Remote: 2025-12-17 12:17:56.307853476 +0000 UTC m=+27.525191438 (delta=88.587759ms)
	I1217 12:17:56.433713 1383625 fix.go:200] guest clock delta is within tolerance: 88.587759ms
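
The guest-clock check runs date +%s.%N on the VM and compares it with the host's wall clock; here the roughly 88ms skew is accepted as within tolerance. A small sketch of the same comparison, where the one-second tolerance is an assumption rather than minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64) // assumes 9 fractional digits
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1765973876.396441235") // sample value from the log
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold for illustration
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
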
	I1217 12:17:56.433720 1383625 start.go:83] releasing machines lock for "no-preload-837348", held for 20.678250148s
	I1217 12:17:56.437692 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.438215 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.438248 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.438468 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:17:56.438511 1383625 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:17:56.438522 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:17:56.438557 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:17:56.438585 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:17:56.438622 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:17:56.438694 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:56.438791 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:17:56.441557 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.441884 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.441910 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.442151 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:56.549655 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:17:56.585688 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:17:56.616212 1383625 ssh_runner.go:195] Run: openssl version
	I1217 12:17:56.623570 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.635584 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:17:56.647516 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.653034 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.653111 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.661252 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:56.675333 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13499072.pem /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:56.689089 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.700877 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:17:56.713067 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.719161 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.719238 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.726999 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:17:56.739650 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 12:17:56.753270 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.765209 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:17:56.779016 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.784208 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.784304 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.791706 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:17:56.805815 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1349907.pem /etc/ssl/certs/51391683.0
	I1217 12:17:56.818101 1383625 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:17:56.824120 1383625 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
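
Each certificate installed above is symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL-style trust stores look up CAs. A local sketch of that hash-and-link step, shelling out to openssl just as the log does; paths are illustrative and the real run performs this over SSH with sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into certsDir under <subject-hash>.0,
    // mirroring the `openssl x509 -hash` plus `ln -fs` steps in the log.
    func linkBySubjectHash(certPath, certsDir string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any existing link, like ln -fs
        if err := os.Symlink(certPath, link); err != nil {
            return "", err
        }
        return link, nil
    }

    func main() {
        link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            panic(err)
        }
        fmt.Println("created", link)
    }
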
	I1217 12:17:56.830076 1383625 ssh_runner.go:195] Run: cat /version.json
	I1217 12:17:56.830156 1383625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:17:56.843122 1383625 ssh_runner.go:195] Run: systemctl --version
	I1217 12:17:56.868225 1383625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:17:57.034012 1383625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:17:57.041583 1383625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:17:57.041676 1383625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:17:57.064507 1383625 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 12:17:57.064541 1383625 start.go:496] detecting cgroup driver to use...
	I1217 12:17:57.064634 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:17:57.087195 1383625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:17:57.110678 1383625 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:17:57.110761 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:17:57.133960 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:17:57.153463 1383625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:17:57.324298 1383625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:17:57.560005 1383625 docker.go:234] disabling docker service ...
	I1217 12:17:57.560092 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:17:57.582076 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:17:57.597492 1383625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:17:57.760621 1383625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:17:57.914460 1383625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:17:57.931574 1383625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:17:57.958173 1383625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:17:57.958258 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.971481 1383625 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:17:57.971552 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.984408 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.999140 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.012639 1383625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:17:58.025964 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.039157 1383625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.062671 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
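
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the keys below. The values are reconstructed from the commands in the log; the section headers follow CRI-O's documented layout and are not visible in the log itself:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
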
	I1217 12:17:58.076162 1383625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:17:58.087115 1383625 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 12:17:58.087195 1383625 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 12:17:58.110093 1383625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:17:58.125048 1383625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:17:58.265056 1383625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 12:17:58.380571 1383625 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:17:58.380672 1383625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
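
"Will wait 60s for socket path /var/run/crio/crio.sock" amounts to polling until a stat of the socket succeeds. A generic version of that wait is sketched below; it polls the path locally rather than over SSH, and the 500ms interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is ready")
    }
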
	I1217 12:17:58.388236 1383625 start.go:564] Will wait 60s for crictl version
	I1217 12:17:58.388318 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.393087 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:17:58.432536 1383625 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:17:58.432622 1383625 ssh_runner.go:195] Run: crio --version
	I1217 12:17:58.466325 1383625 ssh_runner.go:195] Run: crio --version
	I1217 12:17:58.505115 1383625 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1217 12:17:58.509551 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:58.510106 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:58.510140 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:58.510393 1383625 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1217 12:17:58.515384 1383625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:17:58.530703 1383625 kubeadm.go:884] updating cluster {Name:no-preload-837348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-837348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:17:58.530851 1383625 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 12:17:58.530903 1383625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:58.564874 1383625 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 12:17:58.564911 1383625 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 12:17:58.565046 1383625 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:17:58.565069 1383625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.565081 1383625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.565100 1383625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.565110 1383625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.565052 1383625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.565048 1383625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.565048 1383625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.566852 1383625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.566899 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.566942 1383625 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:17:58.566943 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.567046 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.566951 1383625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.567035 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.567325 1383625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.710818 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.714135 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.718171 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.732132 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.735479 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.754251 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 12:17:58.759821 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
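
The podman image inspect runs above are how the harness checks which of the required images are already present on the node before loading them from its local cache (the daemon lookups just before failed, apparently because the build host has no local copies). A sketch of that presence check, reusing the probe seen in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagePresent reports whether the node's podman/CRI-O storage already has img,
    // using the same `sudo podman image inspect --format {{.Id}}` probe as the log.
    func imagePresent(img string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Output()
        return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
        images := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-rc.1",
            "registry.k8s.io/pause:3.10.1",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        for _, img := range images {
            if !imagePresent(img) {
                // In the real flow a missing image would now be loaded from
                // the on-disk cache or pulled from the registry.
                fmt.Println("missing:", img)
            }
        }
    }
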
	I1217 12:17:57.249436 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:17:57.250106 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:17:57.250178 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:17:57.250237 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:17:57.297819 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:57.297846 1382780 cri.go:89] found id: ""
	I1217 12:17:57.297858 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:17:57.297926 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.303476 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:17:57.303560 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:17:57.353597 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:57.353625 1382780 cri.go:89] found id: ""
	I1217 12:17:57.353635 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:17:57.353700 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.358417 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:17:57.358509 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:17:57.402926 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:57.402949 1382780 cri.go:89] found id: ""
	I1217 12:17:57.402958 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:17:57.403039 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.407180 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:17:57.407240 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:17:57.445696 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:57.445723 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:57.445729 1382780 cri.go:89] found id: ""
	I1217 12:17:57.445740 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:17:57.445814 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.450585 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.454551 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:17:57.454632 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:17:57.490470 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:57.490499 1382780 cri.go:89] found id: ""
	I1217 12:17:57.490512 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:17:57.490581 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.495781 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:17:57.495866 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:17:57.536883 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:57.536912 1382780 cri.go:89] found id: ""
	I1217 12:17:57.536924 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:17:57.537012 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.543212 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:17:57.543292 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:17:57.581413 1382780 cri.go:89] found id: ""
	I1217 12:17:57.581441 1382780 logs.go:282] 0 containers: []
	W1217 12:17:57.581450 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:17:57.581456 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:17:57.581529 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:17:57.617389 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:57.617417 1382780 cri.go:89] found id: ""
	I1217 12:17:57.617427 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:17:57.617482 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.621595 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:17:57.621619 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:57.675762 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:17:57.675801 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:57.712549 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:17:57.712586 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:57.747741 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:17:57.747781 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:17:58.081830 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:17:58.081880 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:17:58.160578 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:17:58.160601 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:17:58.160615 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:58.204179 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:17:58.204222 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:58.254156 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:17:58.254197 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:58.291751 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:17:58.291793 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:58.361184 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:17:58.361222 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:58.405383 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:17:58.405419 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:17:58.447141 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:17:58.447182 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:17:58.551232 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:17:58.551275 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
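Between the failed healthz probes above, the 1382780 process collects diagnostics the same way for every control-plane component: "sudo crictl ps -a --quiet --name=<component>" resolves the container ID, and "sudo /usr/bin/crictl logs --tail 400 <id>" dumps its recent output, with journalctl used for the kubelet and CRI-O and dmesg for kernel warnings. The repeated per-component pattern condenses to the following sketch (the component name is only an example input):

    # Per-component log gathering pattern repeated above (sketch reconstructed from the log lines).
    name=kube-apiserver                                         # example component
    id=$(sudo crictl ps -a --quiet --name="$name" | head -n1)   # resolve the container ID
    [ -n "$id" ] && sudo /usr/bin/crictl logs --tail 400 "$id"  # dump its recent logs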
	I1217 12:17:55.710461 1383348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 12:17:55.710544 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-757245 minikube.k8s.io/updated_at=2025_12_17T12_17_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=old-k8s-version-757245 minikube.k8s.io/primary=true
	I1217 12:17:55.710547 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:55.829663 1383348 ops.go:34] apiserver oom_adj: -16
	I1217 12:17:55.980556 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:56.481124 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:56.981481 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:57.481367 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:57.981397 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:58.481229 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:58.981625 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:59.481232 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:59.980701 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:00.480773 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.155344 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:18:03.155378 1383836 machine.go:97] duration metric: took 6.71700129s to provisionDockerMachine
	I1217 12:18:03.155393 1383836 start.go:293] postStartSetup for "pause-137189" (driver="kvm2")
	I1217 12:18:03.155403 1383836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:18:03.155614 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:18:03.159779 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.160325 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160541 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.248230 1383836 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:18:03.253798 1383836 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:18:03.253824 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:18:03.253894 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:18:03.253967 1383836 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:18:03.254079 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:18:03.268822 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:58.874437 1383625 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1217 12:17:58.874486 1383625 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.874539 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.874564 1383625 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 12:17:58.874617 1383625 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.874680 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.920462 1383625 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1217 12:17:58.920518 1383625 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.920570 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.943997 1383625 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1217 12:17:58.944029 1383625 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1217 12:17:58.944048 1383625 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.944048 1383625 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.944101 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.944101 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950708 1383625 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 12:17:58.950770 1383625 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1217 12:17:58.950807 1383625 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.950867 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950778 1383625 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.950884 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.950917 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950975 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.951000 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.959240 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.959309 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.965102 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.053411 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:59.053437 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:59.053501 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.053531 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:59.073293 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:59.084247 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.084247 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:59.149442 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:59.176827 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:59.188951 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:59.188989 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.218135 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:59.218157 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.218196 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:59.269201 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 12:17:59.269346 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 12:17:59.269351 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 12:17:59.269443 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:17:59.323023 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 12:17:59.323038 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.323086 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 12:17:59.323150 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 12:17:59.323184 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:17:59.326311 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 12:17:59.326325 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 12:17:59.326371 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.326392 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1217 12:17:59.326409 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 12:17:59.326416 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 12:17:59.326411 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:17:59.326441 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 12:17:59.381928 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 12:17:59.381971 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1217 12:17:59.381932 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382016 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1217 12:17:59.382049 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1217 12:17:59.382096 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.382094 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382137 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1217 12:17:59.382056 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382203 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1217 12:17:59.520670 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 12:17:59.520723 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 12:17:59.684132 1383625 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.684212 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.770532 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.463462 1383625 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 12:18:00.463525 1383625 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.463593 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:18:00.463650 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1217 12:18:00.492057 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.619920 1383625 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:18:00.620019 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:18:00.629754 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
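The 1383625 process above is the image-cache path taken when the requested Kubernetes images are not preloaded: each image is looked up in the runtime with podman image inspect, marked "needs transfer" when the expected hash is missing, cleared with crictl rmi, copied from the host-side cache into /var/lib/minikube/images, and then imported with podman load. A condensed per-image sketch, using pause:3.10.1 as the example (the copy itself is done by minikube's ssh_runner from the host and is shown only as a placeholder comment):

    # Illustrative per-image flow reconstructed from the commands above (not a verbatim minikube script).
    IMG=registry.k8s.io/pause:3.10.1
    TAR=/var/lib/minikube/images/pause_3.10.1
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      sudo /usr/bin/crictl rmi "$IMG" >/dev/null 2>&1 || true   # drop any stale tag
      stat -c "%s %y" "$TAR" >/dev/null 2>&1 \
        || true                                                 # minikube scp's the cached tarball from the host here
      sudo podman load -i "$TAR"                                # import into CRI-O's image store
    fi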
	I1217 12:18:01.071626 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:01.072470 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:01.072529 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:01.072582 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:01.111787 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:01.111817 1382780 cri.go:89] found id: ""
	I1217 12:18:01.111830 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:01.111901 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.116131 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:01.116218 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:01.157535 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:01.157576 1382780 cri.go:89] found id: ""
	I1217 12:18:01.157588 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:01.157664 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.161897 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:01.161995 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:01.198853 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:01.198887 1382780 cri.go:89] found id: ""
	I1217 12:18:01.198902 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:01.199005 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.203667 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:01.203752 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:01.248264 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:01.248324 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:01.248336 1382780 cri.go:89] found id: ""
	I1217 12:18:01.248349 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:01.248445 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.253768 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.258732 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:01.258814 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:01.302721 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:01.302751 1382780 cri.go:89] found id: ""
	I1217 12:18:01.302764 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:01.302837 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.308464 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:01.308566 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:01.344888 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:01.344938 1382780 cri.go:89] found id: ""
	I1217 12:18:01.344960 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:01.345055 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.349136 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:01.349219 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:01.383714 1382780 cri.go:89] found id: ""
	I1217 12:18:01.383759 1382780 logs.go:282] 0 containers: []
	W1217 12:18:01.383774 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:01.383785 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:01.383881 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:01.419661 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:01.419698 1382780 cri.go:89] found id: ""
	I1217 12:18:01.419710 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:01.419786 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.424730 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:18:01.424763 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:18:01.527974 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:18:01.528032 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:18:01.548976 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:18:01.549049 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:18:01.643790 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:18:01.643828 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:18:01.643847 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:01.705381 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:01.705437 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:01.746724 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:18:01.746762 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:01.802110 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:18:01.802162 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:01.839865 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:18:01.839913 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:18:02.176218 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:18:02.176262 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:02.221839 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:18:02.221888 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:02.304546 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:18:02.304603 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:02.359576 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:18:02.359612 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:02.412316 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:18:02.412358 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:18:00.981322 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:01.481235 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:01.980738 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:02.481090 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:02.980673 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.481220 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.980726 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:04.480746 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:04.981699 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:05.480802 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.302774 1383836 start.go:296] duration metric: took 147.365771ms for postStartSetup
	I1217 12:18:03.302823 1383836 fix.go:56] duration metric: took 6.868936046s for fixHost
	I1217 12:18:03.306746 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307312 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.307349 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307620 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:18:03.307872 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:18:03.307886 1383836 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:18:03.416832 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973883.412776808
	
	I1217 12:18:03.416864 1383836 fix.go:216] guest clock: 1765973883.412776808
	I1217 12:18:03.416874 1383836 fix.go:229] Guest: 2025-12-17 12:18:03.412776808 +0000 UTC Remote: 2025-12-17 12:18:03.302829513 +0000 UTC m=+25.082055048 (delta=109.947295ms)
	I1217 12:18:03.416896 1383836 fix.go:200] guest clock delta is within tolerance: 109.947295ms
	I1217 12:18:03.416903 1383836 start.go:83] releasing machines lock for "pause-137189", held for 6.983049517s
	I1217 12:18:03.420764 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421324 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.421377 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421651 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:03.421709 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:03.421722 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:03.421754 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:03.421787 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:03.421831 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:03.421903 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:03.422007 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:03.424970 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425514 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.425547 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425743 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.530365 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:03.566487 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:03.605108 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:03.611717 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.624257 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:03.641103 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646757 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646831 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.654564 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:03.670747 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.688345 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:03.704482 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710124 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710213 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.717800 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:03.731082 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.749306 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:03.761700 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767349 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767419 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.774481 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:03.790903 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:18:03.796616 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
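The certificate steps above follow OpenSSL's hash-link convention: openssl x509 -hash -noout prints the subject-name hash of a PEM certificate, and a symlink named <hash>.0 under /etc/ssl/certs is how the TLS stack locates that CA, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names come from. A minimal sketch for one certificate (the hash value depends on the certificate contents):

    # Create and verify a hash link for one CA certificate (sketch of the steps above).
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    sudo test -L "/etc/ssl/certs/${h}.0" && echo "CA reachable via /etc/ssl/certs/${h}.0"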
	I1217 12:18:03.804031 1383836 ssh_runner.go:195] Run: cat /version.json
	I1217 12:18:03.804215 1383836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:18:03.839911 1383836 ssh_runner.go:195] Run: systemctl --version
	I1217 12:18:03.848178 1383836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:18:03.999577 1383836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:18:04.009162 1383836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:18:04.009277 1383836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:18:04.022017 1383836 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:18:04.022048 1383836 start.go:496] detecting cgroup driver to use...
	I1217 12:18:04.022156 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:18:04.048238 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:18:04.070644 1383836 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:18:04.070717 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:18:04.092799 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:18:04.109622 1383836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:18:04.307257 1383836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:18:04.492787 1383836 docker.go:234] disabling docker service ...
	I1217 12:18:04.492894 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:18:04.524961 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:18:04.543840 1383836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:18:04.726189 1383836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:18:04.894624 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:18:04.910539 1383836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:18:04.934048 1383836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:18:04.934128 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.947224 1383836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:18:04.947324 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.960156 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.974307 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.992701 1383836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:18:05.012211 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.030236 1383836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.048278 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.066840 1383836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:18:05.082352 1383836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:18:05.103035 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:05.654582 1383836 ssh_runner.go:195] Run: sudo systemctl restart crio
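Taken together, the crictl.yaml write and the sed edits above leave CRI-O configured for this run: the crictl endpoint points at /var/run/crio/crio.sock, the pause image is pinned to registry.k8s.io/pause:3.10.1, the cgroup manager is switched to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls before crio is restarted. The resulting values can be checked on the node with a verification sketch like this (not part of the test run):

    # Confirm the settings the commands above should have left behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sudo grep -A2 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf   # expect ip_unprivileged_port_start=0
    sudo cat /etc/crictl.yaml                                            # expect runtime-endpoint: unix:///var/run/crio/crio.sock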
	I1217 12:18:06.073917 1383836 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:18:06.074017 1383836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 12:18:06.082385 1383836 start.go:564] Will wait 60s for crictl version
	I1217 12:18:06.082505 1383836 ssh_runner.go:195] Run: which crictl
	I1217 12:18:06.087517 1383836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:18:06.130064 1383836 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:18:06.130178 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.175515 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.429728 1383836 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 12:18:05.980897 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:06.480894 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:06.981232 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:07.480653 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:07.658671 1383348 kubeadm.go:1114] duration metric: took 11.948200598s to wait for elevateKubeSystemPrivileges
	I1217 12:18:07.658727 1383348 kubeadm.go:403] duration metric: took 24.530289091s to StartCluster
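The long run of repeated "kubectl get sa default" calls above (process 1383348) is minikube waiting out kubeadm bootstrap: the cluster is only treated as ready for the RBAC elevation step once the default ServiceAccount exists, so the command is retried on a short interval, here for roughly 12 seconds. A minimal sketch of that wait, reusing the binary path from the log (the 0.5 s interval and the absence of a timeout are assumptions):

    # Poll until the default ServiceAccount appears (sketch of the retry loop seen above).
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done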
	I1217 12:18:07.658764 1383348 settings.go:142] acquiring lock: {Name:mkab196c8ac23f97b54763cecaa5ac5ac8f7dd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.658892 1383348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:18:07.660641 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.660994 1383348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 12:18:07.661014 1383348 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 12:18:07.661167 1383348 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:18:07.661271 1383348 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-757245"
	I1217 12:18:07.661293 1383348 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-757245"
	I1217 12:18:07.661315 1383348 config.go:182] Loaded profile config "old-k8s-version-757245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 12:18:07.661327 1383348 host.go:66] Checking if "old-k8s-version-757245" exists ...
	I1217 12:18:07.661375 1383348 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-757245"
	I1217 12:18:07.661394 1383348 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-757245"
	I1217 12:18:07.662598 1383348 out.go:179] * Verifying Kubernetes components...
	I1217 12:18:07.664036 1383348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:07.666087 1383348 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:06.435189 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.435820 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:06.435856 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.436177 1383836 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 12:18:06.446076 1383836 kubeadm.go:884] updating cluster {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:18:06.446322 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:18:06.446414 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.573055 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.573084 1383836 crio.go:433] Images already preloaded, skipping extraction
	I1217 12:18:06.573147 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.693571 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.693599 1383836 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:18:06.693609 1383836 kubeadm.go:935] updating node { 192.168.39.45 8443 v1.34.3 crio true true} ...
	I1217 12:18:06.693749 1383836 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-137189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:18:06.693851 1383836 ssh_runner.go:195] Run: crio config
	I1217 12:18:06.770459 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:18:06.770543 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:18:06.770571 1383836 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:18:06.770601 1383836 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-137189 NodeName:pause-137189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:18:06.770804 1383836 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-137189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.45"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:18:06.770892 1383836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:18:06.795556 1383836 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:18:06.795661 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:18:06.824438 1383836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1217 12:18:06.872321 1383836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:18:06.918865 1383836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1217 12:18:06.972804 1383836 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I1217 12:18:06.988931 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:07.345330 1383836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:18:07.376414 1383836 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189 for IP: 192.168.39.45
	I1217 12:18:07.376445 1383836 certs.go:195] generating shared ca certs ...
	I1217 12:18:07.376468 1383836 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.376687 1383836 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:18:07.376766 1383836 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:18:07.376780 1383836 certs.go:257] generating profile certs ...
	I1217 12:18:07.376898 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/client.key
	I1217 12:18:07.376994 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key.bd5945ce
	I1217 12:18:07.377059 1383836 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key
	I1217 12:18:07.377235 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:07.377290 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:07.377300 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:07.377343 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:07.377382 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:07.377410 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:07.377467 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:07.378515 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:18:07.493330 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:18:07.574427 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:18:07.652304 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:18:07.713042 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 12:18:07.748572 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:18:07.821136 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:18:07.869252 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:18:07.927195 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:08.036351 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:08.131745 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:08.245162 1383836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:18:04.114238 1383625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (3.494184473s)
	I1217 12:18:04.114280 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1217 12:18:04.114317 1383625 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:18:04.114315 1383625 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.484520972s)
	I1217 12:18:04.114363 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:18:04.114416 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:06.591626 1383625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.477220732s)
	I1217 12:18:06.591675 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 12:18:06.591707 1383625 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:18:06.591767 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:18:06.591877 1383625 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.477446649s)
	I1217 12:18:06.591916 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 12:18:06.592035 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 12:18:04.964299 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:04.965062 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:04.965130 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:04.965204 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:05.016906 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:05.016954 1382780 cri.go:89] found id: ""
	I1217 12:18:05.016966 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:05.017075 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.023591 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:05.023705 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:05.067785 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:05.067809 1382780 cri.go:89] found id: ""
	I1217 12:18:05.067820 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:05.067896 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.073889 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:05.073968 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:05.123697 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:05.123726 1382780 cri.go:89] found id: ""
	I1217 12:18:05.123738 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:05.123801 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.129487 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:05.129639 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:05.176993 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:05.177093 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:05.177106 1382780 cri.go:89] found id: ""
	I1217 12:18:05.177116 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:05.177276 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.182303 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.186955 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:05.187054 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:05.229968 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:05.230016 1382780 cri.go:89] found id: ""
	I1217 12:18:05.230029 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:05.230112 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.235045 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:05.235133 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:05.282958 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:05.282998 1382780 cri.go:89] found id: ""
	I1217 12:18:05.283008 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:05.283077 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.287873 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:05.288001 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:05.339719 1382780 cri.go:89] found id: ""
	I1217 12:18:05.339762 1382780 logs.go:282] 0 containers: []
	W1217 12:18:05.339774 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:05.339783 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:05.339891 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:05.387297 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:05.387329 1382780 cri.go:89] found id: ""
	I1217 12:18:05.387341 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:05.387432 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.392630 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:18:05.392668 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:18:05.541892 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:18:05.541950 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:05.636280 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:05.636337 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:05.679509 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:18:05.679552 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:05.770490 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:18:05.770566 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:05.835923 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:18:05.835969 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:05.888348 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:18:05.888396 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:18:06.331090 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:18:06.331148 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:18:06.346192 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:18:06.346232 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:18:06.413958 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:18:06.414009 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:18:06.414029 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:06.490040 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:18:06.490098 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:06.553777 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:18:06.553831 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:06.620455 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:18:06.620497 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:18:09.193061 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:09.193892 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:09.193967 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:09.194052 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:09.282268 1382780 cri.go:89] found id: "115868dfcf12d3df61d7ea2758ac63af46c98b1ceabde72a6d2221da0c4131f0"
	I1217 12:18:09.282297 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:09.282304 1382780 cri.go:89] found id: ""
	I1217 12:18:09.282375 1382780 logs.go:282] 2 containers: [115868dfcf12d3df61d7ea2758ac63af46c98b1ceabde72a6d2221da0c4131f0 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:09.282454 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.288967 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.297409 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:09.297511 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:09.391374 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:09.391411 1382780 cri.go:89] found id: ""
	I1217 12:18:09.391424 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:09.391500 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.398369 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:09.398460 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:09.467444 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:09.467477 1382780 cri.go:89] found id: ""
	I1217 12:18:09.467489 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:09.467562 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.474936 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:09.475056 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:09.547478 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:09.547637 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:09.547652 1382780 cri.go:89] found id: ""
	I1217 12:18:09.547664 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:09.547782 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.554347 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.560316 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:09.560435 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:09.629195 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:09.629229 1382780 cri.go:89] found id: ""
	I1217 12:18:09.629241 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:09.629311 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.635327 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:09.635444 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:09.693332 1382780 cri.go:89] found id: "b854e6ee21b41e2e2c8cf047e543ca62d41eae440ebfacbd994c3da06e4d6cdb"
	I1217 12:18:09.693373 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:09.693382 1382780 cri.go:89] found id: ""
	I1217 12:18:09.693396 1382780 logs.go:282] 2 containers: [b854e6ee21b41e2e2c8cf047e543ca62d41eae440ebfacbd994c3da06e4d6cdb 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:09.693474 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.699407 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.705607 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:09.705688 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:09.782637 1382780 cri.go:89] found id: ""
	I1217 12:18:09.782672 1382780 logs.go:282] 0 containers: []
	W1217 12:18:09.782683 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:09.782691 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:09.782755 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:09.855610 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:09.855640 1382780 cri.go:89] found id: ""
	I1217 12:18:09.855665 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:09.855742 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.860699 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:09.860741 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:07.666742 1383348 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-757245"
	I1217 12:18:07.666788 1383348 host.go:66] Checking if "old-k8s-version-757245" exists ...
	I1217 12:18:07.667687 1383348 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:18:07.667708 1383348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:18:07.669927 1383348 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:18:07.669945 1383348 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:18:07.674265 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.675045 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:18:07.675081 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.675462 1383348 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/old-k8s-version-757245/id_rsa Username:docker}
	I1217 12:18:07.677587 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.678238 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:18:07.678293 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.678837 1383348 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/old-k8s-version-757245/id_rsa Username:docker}
	I1217 12:18:07.998245 1383348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 12:18:08.166998 1383348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:18:08.234779 1383348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:18:08.547033 1383348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:18:10.287454 1383348 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.289158633s)
	I1217 12:18:10.287496 1383348 start.go:1013] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
	I1217 12:18:10.287523 1383348 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.120489043s)
	I1217 12:18:10.288832 1383348 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-757245" to be "Ready" ...
	I1217 12:18:10.304323 1383348 node_ready.go:49] node "old-k8s-version-757245" is "Ready"
	I1217 12:18:10.304371 1383348 node_ready.go:38] duration metric: took 15.506469ms for node "old-k8s-version-757245" to be "Ready" ...
	I1217 12:18:10.304394 1383348 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:18:10.304459 1383348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:18:10.606060 1383348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.058978898s)
	I1217 12:18:10.606137 1383348 api_server.go:72] duration metric: took 2.945074078s to wait for apiserver process to appear ...
	I1217 12:18:10.606160 1383348 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:18:10.606172 1383348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.371350146s)
	I1217 12:18:10.606184 1383348 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I1217 12:18:10.623619 1383348 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
	I1217 12:18:10.625457 1383348 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 12:18:10.625868 1383348 api_server.go:141] control plane version: v1.28.0
	I1217 12:18:10.625898 1383348 api_server.go:131] duration metric: took 19.72983ms to wait for apiserver health ...
	I1217 12:18:10.625910 1383348 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:18:10.626835 1383348 addons.go:530] duration metric: took 2.965673684s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 12:18:10.634773 1383348 system_pods.go:59] 8 kube-system pods found
	I1217 12:18:10.634809 1383348 system_pods.go:61] "coredns-5dd5756b68-92xws" [a72f93e8-c61d-4063-97df-aff70d878bb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.634817 1383348 system_pods.go:61] "coredns-5dd5756b68-m495h" [93a4c8ae-8fba-4ef9-addd-fdc5e9351a90] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.634823 1383348 system_pods.go:61] "etcd-old-k8s-version-757245" [970cfdfb-d063-480d-943a-ad81930ad464] Running
	I1217 12:18:10.634828 1383348 system_pods.go:61] "kube-apiserver-old-k8s-version-757245" [511ffe7e-87bf-48fa-9e58-a02f59d4fda2] Running
	I1217 12:18:10.634832 1383348 system_pods.go:61] "kube-controller-manager-old-k8s-version-757245" [0421248f-481b-4f89-a4fb-6a94a575fc25] Running
	I1217 12:18:10.634835 1383348 system_pods.go:61] "kube-proxy-mctv5" [9b2ead72-2de2-4ad7-82ed-724dfc3461c2] Running
	I1217 12:18:10.634839 1383348 system_pods.go:61] "kube-scheduler-old-k8s-version-757245" [e386f238-5485-4cbb-9564-03614c4207d5] Running
	I1217 12:18:10.634845 1383348 system_pods.go:61] "storage-provisioner" [e3cad041-88a1-4d0e-be11-7072c4e44ddf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:18:10.634853 1383348 system_pods.go:74] duration metric: took 8.936349ms to wait for pod list to return data ...
	I1217 12:18:10.634864 1383348 default_sa.go:34] waiting for default service account to be created ...
	I1217 12:18:10.637498 1383348 default_sa.go:45] found service account: "default"
	I1217 12:18:10.637523 1383348 default_sa.go:55] duration metric: took 2.648264ms for default service account to be created ...
	I1217 12:18:10.637535 1383348 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 12:18:10.641463 1383348 system_pods.go:86] 8 kube-system pods found
	I1217 12:18:10.641499 1383348 system_pods.go:89] "coredns-5dd5756b68-92xws" [a72f93e8-c61d-4063-97df-aff70d878bb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.641515 1383348 system_pods.go:89] "coredns-5dd5756b68-m495h" [93a4c8ae-8fba-4ef9-addd-fdc5e9351a90] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.641522 1383348 system_pods.go:89] "etcd-old-k8s-version-757245" [970cfdfb-d063-480d-943a-ad81930ad464] Running
	I1217 12:18:10.641529 1383348 system_pods.go:89] "kube-apiserver-old-k8s-version-757245" [511ffe7e-87bf-48fa-9e58-a02f59d4fda2] Running
	I1217 12:18:10.641535 1383348 system_pods.go:89] "kube-controller-manager-old-k8s-version-757245" [0421248f-481b-4f89-a4fb-6a94a575fc25] Running
	I1217 12:18:10.641549 1383348 system_pods.go:89] "kube-proxy-mctv5" [9b2ead72-2de2-4ad7-82ed-724dfc3461c2] Running
	I1217 12:18:10.641565 1383348 system_pods.go:89] "kube-scheduler-old-k8s-version-757245" [e386f238-5485-4cbb-9564-03614c4207d5] Running
	I1217 12:18:10.641576 1383348 system_pods.go:89] "storage-provisioner" [e3cad041-88a1-4d0e-be11-7072c4e44ddf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:18:10.641585 1383348 system_pods.go:126] duration metric: took 4.043826ms to wait for k8s-apps to be running ...
	I1217 12:18:10.641598 1383348 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 12:18:10.641659 1383348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 12:18:10.664683 1383348 system_svc.go:56] duration metric: took 23.073336ms WaitForService to wait for kubelet
	I1217 12:18:10.664719 1383348 kubeadm.go:587] duration metric: took 3.003658588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:18:10.664742 1383348 node_conditions.go:102] verifying NodePressure condition ...
	I1217 12:18:10.669436 1383348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 12:18:10.669475 1383348 node_conditions.go:123] node cpu capacity is 2
	I1217 12:18:10.669495 1383348 node_conditions.go:105] duration metric: took 4.746821ms to run NodePressure ...
	I1217 12:18:10.669510 1383348 start.go:242] waiting for startup goroutines ...
	I1217 12:18:08.294033 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:08.311655 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.345003 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:08.386518 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399088 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399191 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.416105 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:08.442880 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.476720 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:08.501370 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511800 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511899 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.524569 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:08.553109 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.572282 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:08.604798 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.616889 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.617016 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.630019 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:08.653536 1383836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:18:08.666006 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:18:08.679610 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:18:08.693271 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:18:08.704893 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:18:08.713855 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:18:08.724142 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 12:18:08.735084 1383836 kubeadm.go:401] StartCluster: {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:18:08.735292 1383836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:18:08.735358 1383836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:18:08.791645 1383836 cri.go:89] found id: "310d732afecf22f7a55f5b9312ad9e71118394ff09fc9f7d7c3eaf2de48cad02"
	I1217 12:18:08.791672 1383836 cri.go:89] found id: "d958a10e60bb18b7c6cfef7e922ec6c511df7903bff6d3fe4b2efb6fb756059c"
	I1217 12:18:08.791677 1383836 cri.go:89] found id: "1944d91c94e5183e69b38181a36718fe96c0be4386a877f00873165f1ee8b0b9"
	I1217 12:18:08.791699 1383836 cri.go:89] found id: "0b055307c937cef89a52e812a0b2a6ef7b83b6907d8c9cd10303092d207d0795"
	I1217 12:18:08.791703 1383836 cri.go:89] found id: "d3b342c3641fa821eadfb0cc69320076516baa945a7859a71b098f85087a5809"
	I1217 12:18:08.791709 1383836 cri.go:89] found id: "e1ade8faaa4b5b905c5a7436d0db742ad1837dde6e3fb0d4c61c936242632f16"
	I1217 12:18:08.791714 1383836 cri.go:89] found id: "0efd0e07325d21b417fc524dc11c66a45c3ed8db4fe88ebeed1de2dad9969f68"
	I1217 12:18:08.791718 1383836 cri.go:89] found id: "efc4e6ac4add4a3d2e1c7ae474271d1f76d922e4d443a1d8880e722d4469f383"
	I1217 12:18:08.791722 1383836 cri.go:89] found id: "166a9985e700638b97cb2541dc51b9d8a9c04973af2c6bedc9713270addf8697"
	I1217 12:18:08.791739 1383836 cri.go:89] found id: "119b3f1b9c1651145ae076affb70e219939b71e58a4f9e72b0af00646d803e4d"
	I1217 12:18:08.791752 1383836 cri.go:89] found id: "686717c825f6ddedcf110c0e997874c12e953f5c4803eccb336ff9aa50b1b3e1"
	I1217 12:18:08.791757 1383836 cri.go:89] found id: ""
	I1217 12:18:08.791821 1383836 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137189 -n pause-137189
helpers_test.go:270: (dbg) Run:  kubectl --context pause-137189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-137189 -n pause-137189
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-137189 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-137189 logs -n 25: (1.455846314s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-470455 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo containerd config dump                                                                                                                                                                                                │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ ssh     │ -p cilium-470455 sudo crio config                                                                                                                                                                                                           │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │                     │
	│ delete  │ -p cilium-470455                                                                                                                                                                                                                            │ cilium-470455          │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │ 17 Dec 25 12:15 UTC │
	│ start   │ -p pause-137189 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-137189           │ jenkins │ v1.37.0 │ 17 Dec 25 12:15 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p running-upgrade-616756 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                      │ running-upgrade-616756 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │                     │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-630475 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                                                 │ stopped-upgrade-630475 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │                     │
	│ delete  │ -p stopped-upgrade-630475                                                                                                                                                                                                                   │ stopped-upgrade-630475 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:16 UTC │
	│ start   │ -p guest-887598 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-887598           │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p cert-expiration-026544 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                                     │ cert-expiration-026544 │ jenkins │ v1.37.0 │ 17 Dec 25 12:16 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-757245 │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │                     │
	│ delete  │ -p cert-expiration-026544                                                                                                                                                                                                                   │ cert-expiration-026544 │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │ 17 Dec 25 12:17 UTC │
	│ start   │ -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-837348      │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │                     │
	│ start   │ -p pause-137189 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-137189           │ jenkins │ v1.37.0 │ 17 Dec 25 12:17 UTC │ 17 Dec 25 12:18 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 12:17:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 12:17:38.291714 1383836 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:17:38.291860 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.291874 1383836 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:38.291880 1383836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:38.292244 1383836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:17:38.292821 1383836 out.go:368] Setting JSON to false
	I1217 12:17:38.295783 1383836 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21597,"bootTime":1765952261,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 12:17:38.295878 1383836 start.go:143] virtualization: kvm guest
	I1217 12:17:38.298768 1383836 out.go:179] * [pause-137189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 12:17:38.300531 1383836 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 12:17:38.300544 1383836 notify.go:221] Checking for updates...
	I1217 12:17:38.302907 1383836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:17:38.304064 1383836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:17:38.305165 1383836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:38.306159 1383836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 12:17:38.307178 1383836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:17:38.308796 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:38.309537 1383836 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:17:38.359896 1383836 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 12:17:38.360896 1383836 start.go:309] selected driver: kvm2
	I1217 12:17:38.360925 1383836 start.go:927] validating driver "kvm2" against &{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.361116 1383836 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:17:38.362349 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:17:38.362436 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:38.362517 1383836 start.go:353] cluster config:
	{Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:38.362700 1383836 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 12:17:38.364261 1383836 out.go:179] * Starting "pause-137189" primary control-plane node in "pause-137189" cluster
	I1217 12:17:35.757546 1383625 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1217 12:17:35.757825 1383625 start.go:159] libmachine.API.Create for "no-preload-837348" (driver="kvm2")
	I1217 12:17:35.757864 1383625 client.go:173] LocalClient.Create starting
	I1217 12:17:35.757928 1383625 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem
	I1217 12:17:35.757966 1383625 main.go:143] libmachine: Decoding PEM data...
	I1217 12:17:35.758013 1383625 main.go:143] libmachine: Parsing certificate...
	I1217 12:17:35.758081 1383625 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem
	I1217 12:17:35.758108 1383625 main.go:143] libmachine: Decoding PEM data...
	I1217 12:17:35.758125 1383625 main.go:143] libmachine: Parsing certificate...
	I1217 12:17:35.758551 1383625 main.go:143] libmachine: creating domain...
	I1217 12:17:35.758560 1383625 main.go:143] libmachine: creating network...
	I1217 12:17:35.760052 1383625 main.go:143] libmachine: found existing default network
	I1217 12:17:35.760388 1383625 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 12:17:35.761487 1383625 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:63:af:da} reservation:<nil>}
	I1217 12:17:35.762478 1383625 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c0dc30}
	I1217 12:17:35.762581 1383625 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-no-preload-837348</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 12:17:35.769459 1383625 main.go:143] libmachine: creating private network mk-no-preload-837348 192.168.50.0/24...
	I1217 12:17:35.859849 1383625 main.go:143] libmachine: private network mk-no-preload-837348 192.168.50.0/24 created
	I1217 12:17:35.860234 1383625 main.go:143] libmachine: <network>
	  <name>mk-no-preload-837348</name>
	  <uuid>40cee4c2-980a-47c7-9a34-797d661c24bf</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:55:e6:ca'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 12:17:35.860271 1383625 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 ...
	I1217 12:17:35.860293 1383625 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 12:17:35.860305 1383625 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:35.860374 1383625 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 12:17:36.191158 1383625 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa...
	I1217 12:17:36.262155 1383625 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk...
	I1217 12:17:36.262207 1383625 main.go:143] libmachine: Writing magic tar header
	I1217 12:17:36.262234 1383625 main.go:143] libmachine: Writing SSH key tar header
	I1217 12:17:36.262318 1383625 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 ...
	I1217 12:17:36.262379 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348
	I1217 12:17:36.262413 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348 (perms=drwx------)
	I1217 12:17:36.262433 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines
	I1217 12:17:36.262446 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube/machines (perms=drwxr-xr-x)
	I1217 12:17:36.262457 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:17:36.262469 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916/.minikube (perms=drwxr-xr-x)
	I1217 12:17:36.262481 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21808-1345916
	I1217 12:17:36.262494 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21808-1345916 (perms=drwxrwxr-x)
	I1217 12:17:36.262510 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 12:17:36.262523 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 12:17:36.262539 1383625 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 12:17:36.262553 1383625 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 12:17:36.262562 1383625 main.go:143] libmachine: checking permissions on dir: /home
	I1217 12:17:36.262571 1383625 main.go:143] libmachine: skipping /home - not owner
	I1217 12:17:36.262576 1383625 main.go:143] libmachine: defining domain...
	I1217 12:17:36.263951 1383625 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>no-preload-837348</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-no-preload-837348'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 12:17:36.269022 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:bd:30:b6 in network default
	I1217 12:17:36.269812 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:36.269834 1383625 main.go:143] libmachine: starting domain...
	I1217 12:17:36.269839 1383625 main.go:143] libmachine: ensuring networks are active...
	I1217 12:17:36.270730 1383625 main.go:143] libmachine: Ensuring network default is active
	I1217 12:17:36.271183 1383625 main.go:143] libmachine: Ensuring network mk-no-preload-837348 is active
	I1217 12:17:36.271861 1383625 main.go:143] libmachine: getting domain XML...
	I1217 12:17:36.273322 1383625 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>no-preload-837348</name>
	  <uuid>deac9a7a-ba38-47f4-bf03-931dc4f036c8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/no-preload-837348.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:3f:19:62'/>
	      <source network='mk-no-preload-837348'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:bd:30:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 12:17:37.676295 1383625 main.go:143] libmachine: waiting for domain to start...
	I1217 12:17:37.677742 1383625 main.go:143] libmachine: domain is now running
	I1217 12:17:37.677758 1383625 main.go:143] libmachine: waiting for IP...
	I1217 12:17:37.678535 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:37.679152 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:37.679166 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:37.679623 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:37.679660 1383625 retry.go:31] will retry after 282.084865ms: waiting for domain to come up
	I1217 12:17:37.963376 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:37.964438 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:37.964461 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:37.964948 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:37.965001 1383625 retry.go:31] will retry after 316.960465ms: waiting for domain to come up
	I1217 12:17:38.283838 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:38.284796 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:38.284841 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:38.285384 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:38.285445 1383625 retry.go:31] will retry after 315.128777ms: waiting for domain to come up
	I1217 12:17:38.602264 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:38.603247 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:38.603271 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:38.603814 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:38.603865 1383625 retry.go:31] will retry after 398.048219ms: waiting for domain to come up
	I1217 12:17:35.668306 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:35.668347 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:17:37.890637 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:17:37.891237 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:17:37.891285 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:17:37.891530 1383348 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1217 12:17:37.896342 1383348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:17:37.912942 1383348 kubeadm.go:884] updating cluster {Name:old-k8s-version-757245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:17:37.913097 1383348 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 12:17:37.913168 1383348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:37.947327 1383348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0". assuming images are not preloaded.
	I1217 12:17:37.947418 1383348 ssh_runner.go:195] Run: which lz4
	I1217 12:17:37.952849 1383348 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 12:17:37.958377 1383348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 12:17:37.958416 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457056555 bytes)
	I1217 12:17:39.697301 1383348 crio.go:462] duration metric: took 1.744496709s to copy over tarball
	I1217 12:17:39.697398 1383348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 12:17:38.365471 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:17:38.365522 1383836 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 12:17:38.365535 1383836 cache.go:65] Caching tarball of preloaded images
	I1217 12:17:38.365658 1383836 preload.go:238] Found /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 12:17:38.365674 1383836 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 12:17:38.365841 1383836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/config.json ...
	I1217 12:17:38.366165 1383836 start.go:360] acquireMachinesLock for pause-137189: {Name:mk7c4b33009a84629d0b15fa1b2a158ad55cf3fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 12:17:41.616441 1383348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.918997848s)
	I1217 12:17:41.616497 1383348 crio.go:469] duration metric: took 1.919161859s to extract the tarball
	I1217 12:17:41.616510 1383348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 12:17:41.665251 1383348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:41.709178 1383348 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:17:41.709203 1383348 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:17:41.709212 1383348 kubeadm.go:935] updating node { 192.168.83.245 8443 v1.28.0 crio true true} ...
	I1217 12:17:41.709306 1383348 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-757245 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:17:41.709376 1383348 ssh_runner.go:195] Run: crio config
	I1217 12:17:41.758504 1383348 cni.go:84] Creating CNI manager for ""
	I1217 12:17:41.758529 1383348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:41.758551 1383348 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:17:41.758572 1383348 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.245 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-757245 NodeName:old-k8s-version-757245 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.245"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.245 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:17:41.758717 1383348 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.245
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-757245"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.245
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.245"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:17:41.758784 1383348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 12:17:41.772626 1383348 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:17:41.772703 1383348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:17:41.784677 1383348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1217 12:17:41.808834 1383348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:17:41.829328 1383348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I1217 12:17:41.850704 1383348 ssh_runner.go:195] Run: grep 192.168.83.245	control-plane.minikube.internal$ /etc/hosts
	I1217 12:17:41.855300 1383348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.245	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 12:17:41.870977 1383348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:17:42.013537 1383348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:17:42.034456 1383348 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245 for IP: 192.168.83.245
	I1217 12:17:42.034488 1383348 certs.go:195] generating shared ca certs ...
	I1217 12:17:42.034512 1383348 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.034724 1383348 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:17:42.034862 1383348 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:17:42.034883 1383348 certs.go:257] generating profile certs ...
	I1217 12:17:42.034967 1383348 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key
	I1217 12:17:42.035000 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt with IP's: []
	I1217 12:17:42.248648 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt ...
	I1217 12:17:42.248685 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: {Name:mkd4f6188837982d0a0dc17d03070915a2e288df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.248897 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key ...
	I1217 12:17:42.248917 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.key: {Name:mk437fbf23952f2cba414b4b2fe12f437c02d18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.249044 1383348 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8
	I1217 12:17:42.249067 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.245]
	I1217 12:17:42.326760 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 ...
	I1217 12:17:42.326793 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8: {Name:mkee1f449a6ebd5a3b2ca2b0ba6d404a247a5806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.327019 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8 ...
	I1217 12:17:42.327040 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8: {Name:mk39ae6a44c5222412802219a2fbebaf741d5553 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.327164 1383348 certs.go:382] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt.b76b40b8 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt
	I1217 12:17:42.327272 1383348 certs.go:386] copying /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key.b76b40b8 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key
	I1217 12:17:42.327358 1383348 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key
	I1217 12:17:42.327382 1383348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt with IP's: []
	I1217 12:17:42.568463 1383348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt ...
	I1217 12:17:42.568498 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt: {Name:mkcae9c3f6f633131d4dfe9c099eb0ae0021cbe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.568695 1383348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key ...
	I1217 12:17:42.568713 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key: {Name:mk23db6e8c67ed2e7d3383233aae3724d08bc9dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:17:42.568914 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:17:42.568977 1383348 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:17:42.569008 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:17:42.569052 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:17:42.569092 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:17:42.569128 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:17:42.569197 1383348 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:42.569799 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:17:42.599975 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:17:42.628365 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:17:42.656627 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:17:42.685196 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 12:17:42.717050 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 12:17:42.752543 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:17:42.796821 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:17:42.838073 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:17:42.870896 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:17:42.903925 1383348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:17:42.937970 1383348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:17:42.963537 1383348 ssh_runner.go:195] Run: openssl version
	I1217 12:17:42.970280 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:17:42.982797 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:17:42.995525 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.001066 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.001139 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:17:43.008969 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:17:43.020905 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.033842 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:17:43.046660 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.051905 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.051988 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:17:43.059514 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:43.072811 1383348 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.084493 1383348 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:17:43.096012 1383348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.101487 1383348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.101564 1383348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:43.109092 1383348 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:17:43.122596 1383348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:17:43.128369 1383348 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 12:17:43.128442 1383348 kubeadm.go:401] StartCluster: {Name:old-k8s-version-757245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-757245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:17:43.128540 1383348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:17:43.128626 1383348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:17:43.171963 1383348 cri.go:89] found id: ""
	I1217 12:17:43.172051 1383348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 12:17:43.188409 1383348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 12:17:43.201853 1383348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 12:17:43.214183 1383348 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 12:17:43.214215 1383348 kubeadm.go:158] found existing configuration files:
	
	I1217 12:17:43.214278 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 12:17:43.228732 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 12:17:43.228802 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 12:17:43.244518 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 12:17:43.259301 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 12:17:43.259367 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 12:17:43.271757 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 12:17:43.284348 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 12:17:43.284425 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 12:17:43.297181 1383348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 12:17:43.308716 1383348 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 12:17:43.308803 1383348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 12:17:43.321409 1383348 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 12:17:43.386204 1383348 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 12:17:43.386313 1383348 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 12:17:43.520121 1383348 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 12:17:43.520282 1383348 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 12:17:43.520411 1383348 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 12:17:43.714647 1383348 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 12:17:39.003650 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:39.004580 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:39.004604 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:39.005155 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:39.005205 1383625 retry.go:31] will retry after 748.235257ms: waiting for domain to come up
	I1217 12:17:39.755421 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:39.756285 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:39.756312 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:39.756774 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:39.756821 1383625 retry.go:31] will retry after 860.765677ms: waiting for domain to come up
	I1217 12:17:40.619481 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:40.622585 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:40.622612 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:40.623106 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:40.623161 1383625 retry.go:31] will retry after 1.141529292s: waiting for domain to come up
	I1217 12:17:41.766036 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:41.766884 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:41.766905 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:41.767454 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:41.767498 1383625 retry.go:31] will retry after 1.422452711s: waiting for domain to come up
	I1217 12:17:43.192374 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:43.193156 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:43.193175 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:43.193586 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:43.193633 1383625 retry.go:31] will retry after 1.200790035s: waiting for domain to come up
	I1217 12:17:40.669170 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:40.669227 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:17:43.738482 1383348 out.go:252]   - Generating certificates and keys ...
	I1217 12:17:43.738655 1383348 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 12:17:43.738795 1383348 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 12:17:43.834748 1383348 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 12:17:44.137451 1383348 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 12:17:44.269800 1383348 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 12:17:44.562263 1383348 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 12:17:44.764289 1383348 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 12:17:44.764517 1383348 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-757245] and IPs [192.168.83.245 127.0.0.1 ::1]
	I1217 12:17:44.887414 1383348 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 12:17:44.887640 1383348 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-757245] and IPs [192.168.83.245 127.0.0.1 ::1]
	I1217 12:17:45.088229 1383348 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 12:17:45.280777 1383348 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 12:17:45.332096 1383348 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 12:17:45.333532 1383348 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 12:17:45.589279 1383348 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 12:17:45.844463 1383348 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 12:17:46.135827 1383348 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 12:17:46.372629 1383348 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 12:17:46.373466 1383348 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 12:17:46.375898 1383348 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 12:17:44.396269 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:44.396921 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:44.396937 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:44.397333 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:44.397373 1383625 retry.go:31] will retry after 1.789377224s: waiting for domain to come up
	I1217 12:17:46.189513 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:46.190539 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:46.190563 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:46.191093 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:46.191143 1383625 retry.go:31] will retry after 2.694089109s: waiting for domain to come up
	I1217 12:17:45.669529 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 12:17:45.669655 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:17:45.669761 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:17:45.728593 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:45.728624 1382780 cri.go:89] found id: "f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9"
	I1217 12:17:45.728635 1382780 cri.go:89] found id: ""
	I1217 12:17:45.728660 1382780 logs.go:282] 2 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9]
	I1217 12:17:45.728736 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.734098 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.738648 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:17:45.738764 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:17:45.785344 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:45.785372 1382780 cri.go:89] found id: ""
	I1217 12:17:45.785383 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:17:45.785462 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.789841 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:17:45.789925 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:17:45.836793 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:45.836822 1382780 cri.go:89] found id: ""
	I1217 12:17:45.836833 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:17:45.836923 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.842997 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:17:45.843088 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:17:45.893333 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:45.893368 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:45.893374 1382780 cri.go:89] found id: ""
	I1217 12:17:45.893385 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:17:45.893468 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.898140 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.903766 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:17:45.903864 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:17:45.945304 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:45.945339 1382780 cri.go:89] found id: ""
	I1217 12:17:45.945354 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:17:45.945435 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.950875 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:17:45.950959 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:17:45.985484 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:45.985512 1382780 cri.go:89] found id: ""
	I1217 12:17:45.985522 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:17:45.985589 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:45.990036 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:17:45.990146 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:17:46.033514 1382780 cri.go:89] found id: ""
	I1217 12:17:46.033550 1382780 logs.go:282] 0 containers: []
	W1217 12:17:46.033563 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:17:46.033572 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:17:46.033646 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:17:46.072220 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:46.072256 1382780 cri.go:89] found id: ""
	I1217 12:17:46.072273 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:17:46.072348 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:46.077833 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:17:46.077865 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 12:17:46.109790 1382780 logs.go:138] Found kubelet problem: Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: W1217 12:16:16.318011    1243 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-616756" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-616756' and this object
	W1217 12:17:46.110120 1382780 logs.go:138] Found kubelet problem: Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: E1217 12:16:16.318117    1243 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-616756\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-616756' and this object" logger="UnhandledError"
	I1217 12:17:46.189112 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:17:46.189158 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:17:46.281432 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:17:46.281461 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:17:46.281481 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:46.323528 1382780 logs.go:123] Gathering logs for kube-apiserver [f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9] ...
	I1217 12:17:46.323571 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f248716c8d0653839148748f3815f2a17947ef670332d4fd614b34a6b1ea84d9"
	I1217 12:17:46.372689 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:17:46.372723 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:46.420134 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:17:46.420179 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:46.455695 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:17:46.455745 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:46.548793 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:17:46.548836 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:46.602964 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:17:46.603027 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:17:46.630074 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:17:46.630131 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:46.673440 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:17:46.673471 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:46.719293 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:17:46.719341 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:46.769022 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:17:46.769063 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:17:47.191306 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:17:47.191349 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:17:47.248568 1382780 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:47.248608 1382780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1217 12:17:47.248676 1382780 out.go:285] X Problems detected in kubelet:
	W1217 12:17:47.248700 1382780 out.go:285]   Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: W1217 12:16:16.318011    1243 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-616756" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-616756' and this object
	W1217 12:17:47.248713 1382780 out.go:285]   Dec 17 12:16:16 running-upgrade-616756 kubelet[1243]: E1217 12:16:16.318117    1243 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:running-upgrade-616756\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'running-upgrade-616756' and this object" logger="UnhandledError"
	I1217 12:17:47.248723 1382780 out.go:374] Setting ErrFile to fd 2...
	I1217 12:17:47.248730 1382780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:17:46.379516 1383348 out.go:252]   - Booting up control plane ...
	I1217 12:17:46.379640 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 12:17:46.379756 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 12:17:46.380663 1383348 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 12:17:46.405815 1383348 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 12:17:46.407103 1383348 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 12:17:46.407216 1383348 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 12:17:46.638633 1383348 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 12:17:48.887414 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:48.888272 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:48.888296 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:48.888733 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:48.888778 1383625 retry.go:31] will retry after 2.517738762s: waiting for domain to come up
	I1217 12:17:51.409568 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:51.410317 1383625 main.go:143] libmachine: no network interface addresses found for domain no-preload-837348 (source=lease)
	I1217 12:17:51.410343 1383625 main.go:143] libmachine: trying to list again with source=arp
	I1217 12:17:51.410728 1383625 main.go:143] libmachine: unable to find current IP address of domain no-preload-837348 in network mk-no-preload-837348 (interfaces detected: [])
	I1217 12:17:51.410776 1383625 retry.go:31] will retry after 3.467213061s: waiting for domain to come up
	I1217 12:17:54.136069 1383348 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.503805 seconds
	I1217 12:17:54.136296 1383348 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 12:17:54.152520 1383348 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 12:17:54.682315 1383348 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 12:17:54.682639 1383348 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-757245 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 12:17:55.198590 1383348 kubeadm.go:319] [bootstrap-token] Using token: 8niwn2.dshdvdj7hppgjh3a
	I1217 12:17:55.199939 1383348 out.go:252]   - Configuring RBAC rules ...
	I1217 12:17:55.200090 1383348 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 12:17:55.207224 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 12:17:55.218315 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 12:17:55.222532 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 12:17:55.229760 1383348 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 12:17:55.233703 1383348 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 12:17:55.250630 1383348 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 12:17:55.542693 1383348 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 12:17:55.619965 1383348 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 12:17:55.624871 1383348 kubeadm.go:319] 
	I1217 12:17:55.624974 1383348 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 12:17:55.625029 1383348 kubeadm.go:319] 
	I1217 12:17:55.625138 1383348 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 12:17:55.625155 1383348 kubeadm.go:319] 
	I1217 12:17:55.625196 1383348 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 12:17:55.625310 1383348 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 12:17:55.625407 1383348 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 12:17:55.625422 1383348 kubeadm.go:319] 
	I1217 12:17:55.625507 1383348 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 12:17:55.625518 1383348 kubeadm.go:319] 
	I1217 12:17:55.625623 1383348 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 12:17:55.625639 1383348 kubeadm.go:319] 
	I1217 12:17:55.625714 1383348 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 12:17:55.625821 1383348 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 12:17:55.625928 1383348 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 12:17:55.625939 1383348 kubeadm.go:319] 
	I1217 12:17:55.626818 1383348 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 12:17:55.626946 1383348 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 12:17:55.626963 1383348 kubeadm.go:319] 
	I1217 12:17:55.627080 1383348 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8niwn2.dshdvdj7hppgjh3a \
	I1217 12:17:55.627229 1383348 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 \
	I1217 12:17:55.627258 1383348 kubeadm.go:319] 	--control-plane 
	I1217 12:17:55.627268 1383348 kubeadm.go:319] 
	I1217 12:17:55.627382 1383348 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 12:17:55.627395 1383348 kubeadm.go:319] 
	I1217 12:17:55.627503 1383348 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8niwn2.dshdvdj7hppgjh3a \
	I1217 12:17:55.627632 1383348 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:03d71c4919b4a2b722377932ade21f7a19ec06bb9a5b5ca567ebf14ade8ad6b0 
	I1217 12:17:55.628693 1383348 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 12:17:55.628736 1383348 cni.go:84] Creating CNI manager for ""
	I1217 12:17:55.628754 1383348 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:17:55.631268 1383348 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 12:17:55.632385 1383348 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 12:17:55.652741 1383348 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1217 12:17:56.433822 1383836 start.go:364] duration metric: took 18.06760151s to acquireMachinesLock for "pause-137189"
	I1217 12:17:56.433878 1383836 start.go:96] Skipping create...Using existing machine configuration
	I1217 12:17:56.433887 1383836 fix.go:54] fixHost starting: 
	I1217 12:17:56.436602 1383836 fix.go:112] recreateIfNeeded on pause-137189: state=Running err=<nil>
	W1217 12:17:56.436647 1383836 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 12:17:56.438325 1383836 out.go:252] * Updating the running kvm2 "pause-137189" VM ...
	I1217 12:17:56.438363 1383836 machine.go:94] provisionDockerMachine start ...
	I1217 12:17:56.441386 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.441852 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.441883 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.442163 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.442430 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.442447 1383836 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:17:56.550048 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.550079 1383836 buildroot.go:166] provisioning hostname "pause-137189"
	I1217 12:17:56.553625 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554109 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.554146 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.554420 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.554672 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.554687 1383836 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-137189 && echo "pause-137189" | sudo tee /etc/hostname
	I1217 12:17:56.683365 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-137189
	
	I1217 12:17:56.686773 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.687311 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.687522 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.687787 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:56.687814 1383836 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-137189' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-137189/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-137189' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:17:56.793486 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:56.793528 1383836 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:17:56.793601 1383836 buildroot.go:174] setting up certificates
	I1217 12:17:56.793615 1383836 provision.go:84] configureAuth start
	I1217 12:17:56.797345 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.797871 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.797907 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.801679 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802181 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.802228 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.802432 1383836 provision.go:143] copyHostCerts
	I1217 12:17:56.802519 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:17:56.802537 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:17:56.802624 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:17:56.802821 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:17:56.802838 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:17:56.802887 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:17:56.803155 1383836 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:17:56.803174 1383836 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:17:56.803217 1383836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:17:56.803310 1383836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.pause-137189 san=[127.0.0.1 192.168.39.45 localhost minikube pause-137189]
	I1217 12:17:56.918738 1383836 provision.go:177] copyRemoteCerts
	I1217 12:17:56.918800 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:17:56.922240 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922715 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:56.922746 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:56.922948 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:17:57.008478 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:17:57.045415 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 12:17:57.085019 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 12:17:57.126836 1383836 provision.go:87] duration metric: took 333.203489ms to configureAuth
	I1217 12:17:57.126872 1383836 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:17:57.127154 1383836 config.go:182] Loaded profile config "pause-137189": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:17:57.130473 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131084 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:17:57.131123 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:17:57.131334 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:57.131639 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:17:57.131668 1383836 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:17:54.879253 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:54.879955 1383625 main.go:143] libmachine: domain no-preload-837348 has current primary IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:54.879976 1383625 main.go:143] libmachine: found domain IP: 192.168.50.4
	I1217 12:17:54.880000 1383625 main.go:143] libmachine: reserving static IP address...
	I1217 12:17:54.880455 1383625 main.go:143] libmachine: unable to find host DHCP lease matching {name: "no-preload-837348", mac: "52:54:00:3f:19:62", ip: "192.168.50.4"} in network mk-no-preload-837348
	I1217 12:17:55.113103 1383625 main.go:143] libmachine: reserved static IP address 192.168.50.4 for domain no-preload-837348
	I1217 12:17:55.113135 1383625 main.go:143] libmachine: waiting for SSH...
	I1217 12:17:55.113144 1383625 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 12:17:55.117017 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.117590 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.117625 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.117828 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.118084 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.118096 1383625 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 12:17:55.225035 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:55.225489 1383625 main.go:143] libmachine: domain creation complete
	I1217 12:17:55.227467 1383625 machine.go:94] provisionDockerMachine start ...
	I1217 12:17:55.230655 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.231189 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.231228 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.231448 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.231647 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.231657 1383625 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 12:17:55.344072 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 12:17:55.344104 1383625 buildroot.go:166] provisioning hostname "no-preload-837348"
	I1217 12:17:55.347605 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.348186 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.348236 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.348609 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.348912 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.348930 1383625 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-837348 && echo "no-preload-837348" | sudo tee /etc/hostname
	I1217 12:17:55.479766 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-837348
	
	I1217 12:17:55.483549 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.484128 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.484179 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.484449 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.484709 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.484728 1383625 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-837348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-837348/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-837348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 12:17:55.603469 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 12:17:55.603504 1383625 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21808-1345916/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-1345916/.minikube}
	I1217 12:17:55.603527 1383625 buildroot.go:174] setting up certificates
	I1217 12:17:55.603539 1383625 provision.go:84] configureAuth start
	I1217 12:17:55.606816 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.607497 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.607538 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.610540 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.611022 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.611054 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.611246 1383625 provision.go:143] copyHostCerts
	I1217 12:17:55.611334 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem, removing ...
	I1217 12:17:55.611349 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem
	I1217 12:17:55.611430 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.pem (1082 bytes)
	I1217 12:17:55.611581 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem, removing ...
	I1217 12:17:55.611592 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem
	I1217 12:17:55.611625 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/cert.pem (1123 bytes)
	I1217 12:17:55.611690 1383625 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem, removing ...
	I1217 12:17:55.611697 1383625 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem
	I1217 12:17:55.611721 1383625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-1345916/.minikube/key.pem (1675 bytes)
	I1217 12:17:55.611769 1383625 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem org=jenkins.no-preload-837348 san=[127.0.0.1 192.168.50.4 localhost minikube no-preload-837348]
	I1217 12:17:55.699239 1383625 provision.go:177] copyRemoteCerts
	I1217 12:17:55.699302 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 12:17:55.702201 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.702647 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.702684 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.702854 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:55.790502 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 12:17:55.831924 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 12:17:55.874154 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 12:17:55.905937 1383625 provision.go:87] duration metric: took 302.383127ms to configureAuth
	I1217 12:17:55.905977 1383625 buildroot.go:189] setting minikube options for container-runtime
	I1217 12:17:55.906246 1383625 config.go:182] Loaded profile config "no-preload-837348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 12:17:55.909788 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.910342 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:55.910395 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:55.910622 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:55.910886 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:55.910910 1383625 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 12:17:56.158558 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:17:56.158589 1383625 machine.go:97] duration metric: took 931.098693ms to provisionDockerMachine
	I1217 12:17:56.158604 1383625 client.go:176] duration metric: took 20.400731835s to LocalClient.Create
	I1217 12:17:56.158653 1383625 start.go:167] duration metric: took 20.400806334s to libmachine.API.Create "no-preload-837348"
	I1217 12:17:56.158668 1383625 start.go:293] postStartSetup for "no-preload-837348" (driver="kvm2")
	I1217 12:17:56.158686 1383625 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:17:56.158765 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:17:56.161556 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.162017 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.162057 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.162247 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:56.245367 1383625 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:17:56.250646 1383625 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:17:56.250677 1383625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:17:56.250763 1383625 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:17:56.250850 1383625 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:17:56.250969 1383625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:17:56.262524 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:56.303233 1383625 start.go:296] duration metric: took 144.54751ms for postStartSetup
	I1217 12:17:56.306769 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.307306 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.307334 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.307632 1383625 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/config.json ...
	I1217 12:17:56.307839 1383625 start.go:128] duration metric: took 20.552189886s to createHost
	I1217 12:17:56.310148 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.310492 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.310534 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.310749 1383625 main.go:143] libmachine: Using SSH client type: native
	I1217 12:17:56.311069 1383625 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.50.4 22 <nil> <nil>}
	I1217 12:17:56.311091 1383625 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:17:56.433653 1383625 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973876.396441235
	
	I1217 12:17:56.433684 1383625 fix.go:216] guest clock: 1765973876.396441235
	I1217 12:17:56.433693 1383625 fix.go:229] Guest: 2025-12-17 12:17:56.396441235 +0000 UTC Remote: 2025-12-17 12:17:56.307853476 +0000 UTC m=+27.525191438 (delta=88.587759ms)
	I1217 12:17:56.433713 1383625 fix.go:200] guest clock delta is within tolerance: 88.587759ms
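The guest clock check above works by asking the VM for its own time with date +%s.%N over SSH and comparing that against the host-side timestamp taken around the same moment; the 88ms delta is well inside minikube's tolerance, so the guest clock is left untouched. A minimal hand-run version of the same comparison, assuming passwordless SSH to the node (the commands are illustrative, not minikube's internal code path):

    host_now=$(date +%s.%N)
    guest_now=$(ssh docker@192.168.50.4 'date +%s.%N')
    # positive delta: guest clock is ahead of the host; negative: behind
    delta=$(echo "$guest_now - $host_now" | bc -l)
    echo "guest clock delta: ${delta}s"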
	I1217 12:17:56.433720 1383625 start.go:83] releasing machines lock for "no-preload-837348", held for 20.678250148s
	I1217 12:17:56.437692 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.438215 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.438248 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.438468 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:17:56.438511 1383625 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:17:56.438522 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:17:56.438557 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:17:56.438585 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:17:56.438622 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:17:56.438694 1383625 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:56.438791 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:17:56.441557 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.441884 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:56.441910 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:56.442151 1383625 sshutil.go:53] new ssh client: &{IP:192.168.50.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/no-preload-837348/id_rsa Username:docker}
	I1217 12:17:56.549655 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:17:56.585688 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:17:56.616212 1383625 ssh_runner.go:195] Run: openssl version
	I1217 12:17:56.623570 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.635584 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:17:56.647516 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.653034 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.653111 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:17:56.661252 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:56.675333 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/13499072.pem /etc/ssl/certs/3ec20f2e.0
	I1217 12:17:56.689089 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.700877 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:17:56.713067 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.719161 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.719238 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:17:56.726999 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:17:56.739650 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 12:17:56.753270 1383625 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.765209 1383625 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:17:56.779016 1383625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.784208 1383625 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.784304 1383625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:17:56.791706 1383625 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:17:56.805815 1383625 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1349907.pem /etc/ssl/certs/51391683.0
	I1217 12:17:56.818101 1383625 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:17:56.824120 1383625 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
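The block above is the standard OpenSSL trust-store setup: each CA certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject-name hash, which is how OpenSSL locates issuers at verification time, and update-ca-certificates (Debian-style) or update-ca-trust (RHEL-style) is then run only if the tool exists on the guest. Done by hand for the one cert from this log, the hash link looks like this:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem)   # prints 3ec20f2e here
    sudo ln -fs /usr/share/ca-certificates/13499072.pem "/etc/ssl/certs/${h}.0"

The trailing .0 is a collision counter; a second certificate with the same subject hash would get .1.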
	I1217 12:17:56.830076 1383625 ssh_runner.go:195] Run: cat /version.json
	I1217 12:17:56.830156 1383625 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:17:56.843122 1383625 ssh_runner.go:195] Run: systemctl --version
	I1217 12:17:56.868225 1383625 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:17:57.034012 1383625 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:17:57.041583 1383625 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:17:57.041676 1383625 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:17:57.064507 1383625 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 12:17:57.064541 1383625 start.go:496] detecting cgroup driver to use...
	I1217 12:17:57.064634 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:17:57.087195 1383625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:17:57.110678 1383625 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:17:57.110761 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:17:57.133960 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:17:57.153463 1383625 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:17:57.324298 1383625 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:17:57.560005 1383625 docker.go:234] disabling docker service ...
	I1217 12:17:57.560092 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:17:57.582076 1383625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:17:57.597492 1383625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:17:57.760621 1383625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:17:57.914460 1383625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:17:57.931574 1383625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:17:57.958173 1383625 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:17:57.958258 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.971481 1383625 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:17:57.971552 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.984408 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:57.999140 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.012639 1383625 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:17:58.025964 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.039157 1383625 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:17:58.062671 1383625 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
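The sed commands above edit CRI-O's drop-in config in place rather than writing a fresh file. Read together, they should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch inferred from the commands in the log, not a capture of the actual file; surrounding sections are omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]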
	I1217 12:17:58.076162 1383625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:17:58.087115 1383625 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 12:17:58.087195 1383625 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 12:17:58.110093 1383625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
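The failed sysctl read only means the br_netfilter module is not loaded yet, which is why minikube immediately falls back to modprobe and then enables IPv4 forwarding; the warning is expected on a fresh Buildroot guest. The same check-then-load pattern in plain shell:

    if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null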
	I1217 12:17:58.125048 1383625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:17:58.265056 1383625 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 12:17:58.380571 1383625 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:17:58.380672 1383625 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 12:17:58.388236 1383625 start.go:564] Will wait 60s for crictl version
	I1217 12:17:58.388318 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.393087 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:17:58.432536 1383625 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:17:58.432622 1383625 ssh_runner.go:195] Run: crio --version
	I1217 12:17:58.466325 1383625 ssh_runner.go:195] Run: crio --version
	I1217 12:17:58.505115 1383625 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.29.1 ...
	I1217 12:17:58.509551 1383625 main.go:143] libmachine: domain no-preload-837348 has defined MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:58.510106 1383625 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3f:19:62", ip: ""} in network mk-no-preload-837348: {Iface:virbr2 ExpiryTime:2025-12-17 13:17:51 +0000 UTC Type:0 Mac:52:54:00:3f:19:62 Iaid: IPaddr:192.168.50.4 Prefix:24 Hostname:no-preload-837348 Clientid:01:52:54:00:3f:19:62}
	I1217 12:17:58.510140 1383625 main.go:143] libmachine: domain no-preload-837348 has defined IP address 192.168.50.4 and MAC address 52:54:00:3f:19:62 in network mk-no-preload-837348
	I1217 12:17:58.510393 1383625 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1217 12:17:58.515384 1383625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
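The one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal entry, append the current gateway IP, and copy the result back as root so the node can resolve the host machine. Unrolled into separate steps (same commands as in the log; the separator between IP and name is a tab):

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
    printf '192.168.50.1\thost.minikube.internal\n' >> /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts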
	I1217 12:17:58.530703 1383625 kubeadm.go:884] updating cluster {Name:no-preload-837348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-rc.1 ClusterName:no-preload-837348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.4 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:17:58.530851 1383625 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 12:17:58.530903 1383625 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:17:58.564874 1383625 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 12:17:58.564911 1383625 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 12:17:58.565046 1383625 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:17:58.565069 1383625 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.565081 1383625 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.565100 1383625 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.565110 1383625 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.565052 1383625 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.565048 1383625 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.565048 1383625 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.566852 1383625 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.566899 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.566942 1383625 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:17:58.566943 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.567046 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.566951 1383625 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.567035 1383625 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.567325 1383625 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.710818 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.714135 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.718171 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.732132 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.735479 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.754251 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 12:17:58.759821 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:57.249436 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:17:57.250106 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:17:57.250178 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:17:57.250237 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:17:57.297819 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:57.297846 1382780 cri.go:89] found id: ""
	I1217 12:17:57.297858 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:17:57.297926 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.303476 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:17:57.303560 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:17:57.353597 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:57.353625 1382780 cri.go:89] found id: ""
	I1217 12:17:57.353635 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:17:57.353700 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.358417 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:17:57.358509 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:17:57.402926 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:57.402949 1382780 cri.go:89] found id: ""
	I1217 12:17:57.402958 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:17:57.403039 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.407180 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:17:57.407240 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:17:57.445696 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:57.445723 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:57.445729 1382780 cri.go:89] found id: ""
	I1217 12:17:57.445740 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:17:57.445814 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.450585 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.454551 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:17:57.454632 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:17:57.490470 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:57.490499 1382780 cri.go:89] found id: ""
	I1217 12:17:57.490512 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:17:57.490581 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.495781 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:17:57.495866 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:17:57.536883 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:57.536912 1382780 cri.go:89] found id: ""
	I1217 12:17:57.536924 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:17:57.537012 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.543212 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:17:57.543292 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:17:57.581413 1382780 cri.go:89] found id: ""
	I1217 12:17:57.581441 1382780 logs.go:282] 0 containers: []
	W1217 12:17:57.581450 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:17:57.581456 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:17:57.581529 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:17:57.617389 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:57.617417 1382780 cri.go:89] found id: ""
	I1217 12:17:57.617427 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:17:57.617482 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:17:57.621595 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:17:57.621619 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:17:57.675762 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:17:57.675801 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:17:57.712549 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:17:57.712586 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:17:57.747741 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:17:57.747781 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:17:58.081830 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:17:58.081880 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:17:58.160578 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:17:58.160601 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:17:58.160615 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:17:58.204179 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:17:58.204222 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:17:58.254156 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:17:58.254197 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:17:58.291751 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:17:58.291793 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:17:58.361184 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:17:58.361222 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:17:58.405383 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:17:58.405419 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:17:58.447141 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:17:58.447182 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:17:58.551232 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:17:58.551275 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:17:55.710461 1383348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 12:17:55.710544 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-757245 minikube.k8s.io/updated_at=2025_12_17T12_17_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e7c291d147a7a4c759554efbd6d659a1a65fa869 minikube.k8s.io/name=old-k8s-version-757245 minikube.k8s.io/primary=true
	I1217 12:17:55.710547 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:55.829663 1383348 ops.go:34] apiserver oom_adj: -16
	I1217 12:17:55.980556 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:56.481124 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:56.981481 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:57.481367 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:57.981397 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:58.481229 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:58.981625 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:59.481232 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:17:59.980701 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:00.480773 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
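The repeated kubectl get sa default calls above are a readiness poll: the control plane is treated as usable once the default service account exists in the default namespace, and the timestamps show minikube retrying roughly every half second until that call succeeds. A minimal equivalent loop, using the kubectl binary and kubeconfig paths from the log:

    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done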
	I1217 12:18:03.155344 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 12:18:03.155378 1383836 machine.go:97] duration metric: took 6.71700129s to provisionDockerMachine
	I1217 12:18:03.155393 1383836 start.go:293] postStartSetup for "pause-137189" (driver="kvm2")
	I1217 12:18:03.155403 1383836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 12:18:03.155614 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 12:18:03.159779 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160276 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.160325 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.160541 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.248230 1383836 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 12:18:03.253798 1383836 info.go:137] Remote host: Buildroot 2025.02
	I1217 12:18:03.253824 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/addons for local assets ...
	I1217 12:18:03.253894 1383836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-1345916/.minikube/files for local assets ...
	I1217 12:18:03.253967 1383836 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem -> 13499072.pem in /etc/ssl/certs
	I1217 12:18:03.254079 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 12:18:03.268822 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:17:58.874437 1383625 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1217 12:17:58.874486 1383625 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.874539 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.874564 1383625 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 12:17:58.874617 1383625 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.874680 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.920462 1383625 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1217 12:17:58.920518 1383625 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.920570 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.943997 1383625 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1217 12:17:58.944029 1383625 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1217 12:17:58.944048 1383625 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.944048 1383625 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.944101 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.944101 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950708 1383625 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 12:17:58.950770 1383625 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1217 12:17:58.950807 1383625 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:58.950867 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950778 1383625 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 12:17:58.950884 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:58.950917 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:17:58.950975 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:58.951000 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:58.959240 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:58.959309 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:58.965102 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.053411 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:59.053437 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:59.053501 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.053531 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:59.073293 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:59.084247 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.084247 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:59.149442 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 12:17:59.176827 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 12:17:59.188951 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 12:17:59.188989 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.218135 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 12:17:59.218157 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 12:17:59.218196 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 12:17:59.269201 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 12:17:59.269346 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 12:17:59.269351 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 12:17:59.269443 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:17:59.323023 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 12:17:59.323038 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 12:17:59.323086 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 12:17:59.323150 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 12:17:59.323184 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:17:59.326311 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 12:17:59.326325 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 12:17:59.326371 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.326392 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1217 12:17:59.326409 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 12:17:59.326416 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 12:17:59.326411 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:17:59.326441 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 12:17:59.381928 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 12:17:59.381971 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1217 12:17:59.381932 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382016 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1217 12:17:59.382049 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1217 12:17:59.382096 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.382094 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382137 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1217 12:17:59.382056 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1217 12:17:59.382203 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1217 12:17:59.520670 1383625 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 12:17:59.520723 1383625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 12:17:59.684132 1383625 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.684212 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 12:17:59.770532 1383625 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.463462 1383625 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 12:18:00.463525 1383625 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.463593 1383625 ssh_runner.go:195] Run: which crictl
	I1217 12:18:00.463650 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
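The pause image above has just gone through the full cache-load path that every image takes on a no-preload cluster: a stat existence check on the VM, a copy of the cached tarball from the host when that check fails, a podman load into CRI-O's image store, and only then is the image counted as transferred. Sketched by hand for that one image (the scp invocation is illustrative; minikube drives the copy over its own SSH session rather than the scp CLI):

    img=/var/lib/minikube/images/pause_3.10.1
    cache=/home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
    stat -c "%s %y" "$img" 2>/dev/null || scp "$cache" docker@192.168.50.4:"$img"
    sudo podman load -i "$img"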
	I1217 12:18:00.492057 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:00.619920 1383625 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:18:00.620019 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 12:18:00.629754 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:01.071626 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:01.072470 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:01.072529 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:01.072582 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:01.111787 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:01.111817 1382780 cri.go:89] found id: ""
	I1217 12:18:01.111830 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:01.111901 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.116131 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:01.116218 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:01.157535 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:01.157576 1382780 cri.go:89] found id: ""
	I1217 12:18:01.157588 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:01.157664 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.161897 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:01.161995 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:01.198853 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:01.198887 1382780 cri.go:89] found id: ""
	I1217 12:18:01.198902 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:01.199005 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.203667 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:01.203752 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:01.248264 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:01.248324 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:01.248336 1382780 cri.go:89] found id: ""
	I1217 12:18:01.248349 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:01.248445 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.253768 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.258732 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:01.258814 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:01.302721 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:01.302751 1382780 cri.go:89] found id: ""
	I1217 12:18:01.302764 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:01.302837 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.308464 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:01.308566 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:01.344888 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:01.344938 1382780 cri.go:89] found id: ""
	I1217 12:18:01.344960 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:01.345055 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.349136 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:01.349219 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:01.383714 1382780 cri.go:89] found id: ""
	I1217 12:18:01.383759 1382780 logs.go:282] 0 containers: []
	W1217 12:18:01.383774 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:01.383785 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:01.383881 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:01.419661 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:01.419698 1382780 cri.go:89] found id: ""
	I1217 12:18:01.419710 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:01.419786 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:01.424730 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:18:01.424763 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:18:01.527974 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:18:01.528032 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:18:01.548976 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:18:01.549049 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:18:01.643790 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:18:01.643828 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:18:01.643847 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:01.705381 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:01.705437 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:01.746724 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:18:01.746762 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:01.802110 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:18:01.802162 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:01.839865 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:18:01.839913 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:18:02.176218 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:18:02.176262 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:02.221839 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:18:02.221888 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:02.304546 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:18:02.304603 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:02.359576 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:18:02.359612 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:02.412316 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:18:02.412358 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:18:00.981322 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:01.481235 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:01.980738 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:02.481090 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:02.980673 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.481220 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.980726 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:04.480746 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:04.981699 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:05.480802 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:03.302774 1383836 start.go:296] duration metric: took 147.365771ms for postStartSetup
	I1217 12:18:03.302823 1383836 fix.go:56] duration metric: took 6.868936046s for fixHost
	I1217 12:18:03.306746 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307312 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.307349 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.307620 1383836 main.go:143] libmachine: Using SSH client type: native
	I1217 12:18:03.307872 1383836 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.45 22 <nil> <nil>}
	I1217 12:18:03.307886 1383836 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 12:18:03.416832 1383836 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765973883.412776808
	
	I1217 12:18:03.416864 1383836 fix.go:216] guest clock: 1765973883.412776808
	I1217 12:18:03.416874 1383836 fix.go:229] Guest: 2025-12-17 12:18:03.412776808 +0000 UTC Remote: 2025-12-17 12:18:03.302829513 +0000 UTC m=+25.082055048 (delta=109.947295ms)
	I1217 12:18:03.416896 1383836 fix.go:200] guest clock delta is within tolerance: 109.947295ms
	I1217 12:18:03.416903 1383836 start.go:83] releasing machines lock for "pause-137189", held for 6.983049517s
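	
	The fix.go lines above take the guest's "date +%s.%N" output, treat it as the guest clock, and accept the start when the delta against the host clock stays within a tolerance (here 109.947295ms). A minimal Go sketch of that comparison follows; the parsing helper and the one-second tolerance are illustrative assumptions, not minikube's actual implementation.
	
	    package main
	
	    import (
	        "fmt"
	        "math"
	        "strconv"
	        "strings"
	        "time"
	    )
	
	    // parseGuestClock converts "date +%s.%N" output (e.g. "1765973883.412776808")
	    // into a time.Time.
	    func parseGuestClock(s string) (time.Time, error) {
	        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	        sec, err := strconv.ParseInt(parts[0], 10, 64)
	        if err != nil {
	            return time.Time{}, err
	        }
	        var nsec int64
	        if len(parts) == 2 {
	            nsec, err = strconv.ParseInt(parts[1], 10, 64)
	            if err != nil {
	                return time.Time{}, err
	            }
	        }
	        return time.Unix(sec, nsec), nil
	    }
	
	    func main() {
	        guest, err := parseGuestClock("1765973883.412776808")
	        if err != nil {
	            panic(err)
	        }
	        delta := time.Since(guest)
	        // Accept small drift between host and guest; the 1s tolerance is an assumed value.
	        if math.Abs(delta.Seconds()) < 1.0 {
	            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	        } else {
	            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	        }
	    }
	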
	I1217 12:18:03.420764 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421324 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.421377 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.421651 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:03.421709 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:03.421722 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:03.421754 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:03.421787 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:03.421831 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:03.421903 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:03.422007 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:03.424970 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425514 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:03.425547 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:03.425743 1383836 sshutil.go:53] new ssh client: &{IP:192.168.39.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/pause-137189/id_rsa Username:docker}
	I1217 12:18:03.530365 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:03.566487 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:03.605108 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:03.611717 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.624257 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:03.641103 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646757 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.646831 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:03.654564 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:03.670747 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.688345 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:03.704482 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710124 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.710213 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:03.717800 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:03.731082 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.749306 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:03.761700 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767349 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.767419 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:03.774481 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:03.790903 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1217 12:18:03.796616 1383836 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
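	
	The sequence above copies each PEM into /usr/share/ca-certificates on the guest and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), so TLS clients that scan that directory pick it up. A rough Go sketch of the hash-and-symlink step, shelling out to openssl just as the log does, is shown below; the helper name and error handling are illustrative, and only the paths come from the log.
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )
	
	    // installCACert symlinks certPath into /etc/ssl/certs under its OpenSSL
	    // subject hash, so clients scanning that directory can find it.
	    func installCACert(certPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return fmt.Errorf("hashing %s: %w", certPath, err)
	        }
	        hash := strings.TrimSpace(string(out))
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        // Replace any stale link, mirroring `ln -fs` in the log.
	        _ = os.Remove(link)
	        return os.Symlink(certPath, link)
	    }
	
	    func main() {
	        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }
	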
	I1217 12:18:03.804031 1383836 ssh_runner.go:195] Run: cat /version.json
	I1217 12:18:03.804215 1383836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 12:18:03.839911 1383836 ssh_runner.go:195] Run: systemctl --version
	I1217 12:18:03.848178 1383836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 12:18:03.999577 1383836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 12:18:04.009162 1383836 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 12:18:04.009277 1383836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 12:18:04.022017 1383836 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 12:18:04.022048 1383836 start.go:496] detecting cgroup driver to use...
	I1217 12:18:04.022156 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 12:18:04.048238 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 12:18:04.070644 1383836 docker.go:218] disabling cri-docker service (if available) ...
	I1217 12:18:04.070717 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 12:18:04.092799 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 12:18:04.109622 1383836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 12:18:04.307257 1383836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 12:18:04.492787 1383836 docker.go:234] disabling docker service ...
	I1217 12:18:04.492894 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 12:18:04.524961 1383836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 12:18:04.543840 1383836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 12:18:04.726189 1383836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 12:18:04.894624 1383836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 12:18:04.910539 1383836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 12:18:04.934048 1383836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 12:18:04.934128 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.947224 1383836 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 12:18:04.947324 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.960156 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.974307 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:04.992701 1383836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 12:18:05.012211 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.030236 1383836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 12:18:05.048278 1383836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
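	
	Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with a drop-in roughly like the one below. The values are the ones shown in the log; the section headers are assumptions, since the sed expressions only match keys and do not create sections.
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	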
	I1217 12:18:05.066840 1383836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 12:18:05.082352 1383836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 12:18:05.103035 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:05.654582 1383836 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 12:18:06.073917 1383836 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 12:18:06.074017 1383836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
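	
	After restarting crio, the start logic waits up to 60s for the runtime socket to reappear before probing crictl. A minimal Go polling loop for that kind of wait is sketched below; the 500ms interval and the function name are illustrative assumptions.
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "time"
	    )
	
	    // waitForSocket polls for a filesystem path until it exists or the timeout expires.
	    func waitForSocket(path string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if _, err := os.Stat(path); err == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	    }
	
	    func main() {
	        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("crio socket is ready")
	    }
	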
	I1217 12:18:06.082385 1383836 start.go:564] Will wait 60s for crictl version
	I1217 12:18:06.082505 1383836 ssh_runner.go:195] Run: which crictl
	I1217 12:18:06.087517 1383836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 12:18:06.130064 1383836 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 12:18:06.130178 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.175515 1383836 ssh_runner.go:195] Run: crio --version
	I1217 12:18:06.429728 1383836 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.29.1 ...
	I1217 12:18:05.980897 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:06.480894 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:06.981232 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:07.480653 1383348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 12:18:07.658671 1383348 kubeadm.go:1114] duration metric: took 11.948200598s to wait for elevateKubeSystemPrivileges
	I1217 12:18:07.658727 1383348 kubeadm.go:403] duration metric: took 24.530289091s to StartCluster
	I1217 12:18:07.658764 1383348 settings.go:142] acquiring lock: {Name:mkab196c8ac23f97b54763cecaa5ac5ac8f7dd0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.658892 1383348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:18:07.660641 1383348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/kubeconfig: {Name:mkf9f7ccd4382c7fd64f6772f4fae6c99a70cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.660994 1383348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 12:18:07.661014 1383348 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.245 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 12:18:07.661167 1383348 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 12:18:07.661271 1383348 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-757245"
	I1217 12:18:07.661293 1383348 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-757245"
	I1217 12:18:07.661315 1383348 config.go:182] Loaded profile config "old-k8s-version-757245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 12:18:07.661327 1383348 host.go:66] Checking if "old-k8s-version-757245" exists ...
	I1217 12:18:07.661375 1383348 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-757245"
	I1217 12:18:07.661394 1383348 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-757245"
	I1217 12:18:07.662598 1383348 out.go:179] * Verifying Kubernetes components...
	I1217 12:18:07.664036 1383348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:07.666087 1383348 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:06.435189 1383836 main.go:143] libmachine: domain pause-137189 has defined MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.435820 1383836 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ad:46:ee", ip: ""} in network mk-pause-137189: {Iface:virbr1 ExpiryTime:2025-12-17 13:16:27 +0000 UTC Type:0 Mac:52:54:00:ad:46:ee Iaid: IPaddr:192.168.39.45 Prefix:24 Hostname:pause-137189 Clientid:01:52:54:00:ad:46:ee}
	I1217 12:18:06.435856 1383836 main.go:143] libmachine: domain pause-137189 has defined IP address 192.168.39.45 and MAC address 52:54:00:ad:46:ee in network mk-pause-137189
	I1217 12:18:06.436177 1383836 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 12:18:06.446076 1383836 kubeadm.go:884] updating cluster {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 12:18:06.446322 1383836 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 12:18:06.446414 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.573055 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.573084 1383836 crio.go:433] Images already preloaded, skipping extraction
	I1217 12:18:06.573147 1383836 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 12:18:06.693571 1383836 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 12:18:06.693599 1383836 cache_images.go:86] Images are preloaded, skipping loading
	I1217 12:18:06.693609 1383836 kubeadm.go:935] updating node { 192.168.39.45 8443 v1.34.3 crio true true} ...
	I1217 12:18:06.693749 1383836 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-137189 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.45
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 12:18:06.693851 1383836 ssh_runner.go:195] Run: crio config
	I1217 12:18:06.770459 1383836 cni.go:84] Creating CNI manager for ""
	I1217 12:18:06.770543 1383836 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 12:18:06.770571 1383836 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 12:18:06.770601 1383836 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.45 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-137189 NodeName:pause-137189 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.45"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.45 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 12:18:06.770804 1383836 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.45
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-137189"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.45"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.45"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 12:18:06.770892 1383836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 12:18:06.795556 1383836 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 12:18:06.795661 1383836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 12:18:06.824438 1383836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1217 12:18:06.872321 1383836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 12:18:06.918865 1383836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1217 12:18:06.972804 1383836 ssh_runner.go:195] Run: grep 192.168.39.45	control-plane.minikube.internal$ /etc/hosts
	I1217 12:18:06.988931 1383836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 12:18:07.345330 1383836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:18:07.376414 1383836 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189 for IP: 192.168.39.45
	I1217 12:18:07.376445 1383836 certs.go:195] generating shared ca certs ...
	I1217 12:18:07.376468 1383836 certs.go:227] acquiring lock for ca certs: {Name:mk7dff4294abcbe4af041891799d61c459798c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 12:18:07.376687 1383836 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key
	I1217 12:18:07.376766 1383836 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key
	I1217 12:18:07.376780 1383836 certs.go:257] generating profile certs ...
	I1217 12:18:07.376898 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/client.key
	I1217 12:18:07.376994 1383836 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key.bd5945ce
	I1217 12:18:07.377059 1383836 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key
	I1217 12:18:07.377235 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem (1338 bytes)
	W1217 12:18:07.377290 1383836 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907_empty.pem, impossibly tiny 0 bytes
	I1217 12:18:07.377300 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 12:18:07.377343 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/ca.pem (1082 bytes)
	I1217 12:18:07.377382 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/cert.pem (1123 bytes)
	I1217 12:18:07.377410 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/key.pem (1675 bytes)
	I1217 12:18:07.377467 1383836 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem (1708 bytes)
	I1217 12:18:07.378515 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 12:18:07.493330 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 12:18:07.574427 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 12:18:07.652304 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1217 12:18:07.713042 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 12:18:07.748572 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 12:18:07.821136 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 12:18:07.869252 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/pause-137189/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 12:18:07.927195 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/ssl/certs/13499072.pem --> /usr/share/ca-certificates/13499072.pem (1708 bytes)
	I1217 12:18:08.036351 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 12:18:08.131745 1383836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-1345916/.minikube/certs/1349907.pem --> /usr/share/ca-certificates/1349907.pem (1338 bytes)
	I1217 12:18:08.245162 1383836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 12:18:04.114238 1383625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (3.494184473s)
	I1217 12:18:04.114280 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1217 12:18:04.114317 1383625 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:18:04.114315 1383625 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.484520972s)
	I1217 12:18:04.114363 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 12:18:04.114416 1383625 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 12:18:06.591626 1383625 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.477220732s)
	I1217 12:18:06.591675 1383625 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 12:18:06.591707 1383625 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:18:06.591767 1383625 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1217 12:18:06.591877 1383625 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.477446649s)
	I1217 12:18:06.591916 1383625 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 12:18:06.592035 1383625 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 12:18:04.964299 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:04.965062 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:04.965130 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:04.965204 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:05.016906 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:05.016954 1382780 cri.go:89] found id: ""
	I1217 12:18:05.016966 1382780 logs.go:282] 1 containers: [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:05.017075 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.023591 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:05.023705 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:05.067785 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:05.067809 1382780 cri.go:89] found id: ""
	I1217 12:18:05.067820 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:05.067896 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.073889 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:05.073968 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:05.123697 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:05.123726 1382780 cri.go:89] found id: ""
	I1217 12:18:05.123738 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:05.123801 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.129487 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:05.129639 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:05.176993 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:05.177093 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:05.177106 1382780 cri.go:89] found id: ""
	I1217 12:18:05.177116 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:05.177276 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.182303 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.186955 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:05.187054 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:05.229968 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:05.230016 1382780 cri.go:89] found id: ""
	I1217 12:18:05.230029 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:05.230112 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.235045 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:05.235133 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:05.282958 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:05.282998 1382780 cri.go:89] found id: ""
	I1217 12:18:05.283008 1382780 logs.go:282] 1 containers: [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:05.283077 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.287873 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:05.288001 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:05.339719 1382780 cri.go:89] found id: ""
	I1217 12:18:05.339762 1382780 logs.go:282] 0 containers: []
	W1217 12:18:05.339774 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:05.339783 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:05.339891 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:05.387297 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:05.387329 1382780 cri.go:89] found id: ""
	I1217 12:18:05.387341 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:05.387432 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:05.392630 1382780 logs.go:123] Gathering logs for kubelet ...
	I1217 12:18:05.392668 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 12:18:05.541892 1382780 logs.go:123] Gathering logs for etcd [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9] ...
	I1217 12:18:05.541950 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:05.636280 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:05.636337 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:05.679509 1382780 logs.go:123] Gathering logs for kube-scheduler [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73] ...
	I1217 12:18:05.679552 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:05.770490 1382780 logs.go:123] Gathering logs for kube-scheduler [93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29] ...
	I1217 12:18:05.770566 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:05.835923 1382780 logs.go:123] Gathering logs for kube-controller-manager [1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499] ...
	I1217 12:18:05.835969 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:05.888348 1382780 logs.go:123] Gathering logs for CRI-O ...
	I1217 12:18:05.888396 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 12:18:06.331090 1382780 logs.go:123] Gathering logs for dmesg ...
	I1217 12:18:06.331148 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 12:18:06.346192 1382780 logs.go:123] Gathering logs for describe nodes ...
	I1217 12:18:06.346232 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 12:18:06.413958 1382780 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 12:18:06.414009 1382780 logs.go:123] Gathering logs for kube-apiserver [0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f] ...
	I1217 12:18:06.414029 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:06.490040 1382780 logs.go:123] Gathering logs for kube-proxy [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8] ...
	I1217 12:18:06.490098 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:06.553777 1382780 logs.go:123] Gathering logs for storage-provisioner [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90] ...
	I1217 12:18:06.553831 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:06.620455 1382780 logs.go:123] Gathering logs for container status ...
	I1217 12:18:06.620497 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 12:18:09.193061 1382780 api_server.go:253] Checking apiserver healthz at https://192.168.61.103:8443/healthz ...
	I1217 12:18:09.193892 1382780 api_server.go:269] stopped: https://192.168.61.103:8443/healthz: Get "https://192.168.61.103:8443/healthz": dial tcp 192.168.61.103:8443: connect: connection refused
	I1217 12:18:09.193967 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 12:18:09.194052 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 12:18:09.282268 1382780 cri.go:89] found id: "115868dfcf12d3df61d7ea2758ac63af46c98b1ceabde72a6d2221da0c4131f0"
	I1217 12:18:09.282297 1382780 cri.go:89] found id: "0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f"
	I1217 12:18:09.282304 1382780 cri.go:89] found id: ""
	I1217 12:18:09.282375 1382780 logs.go:282] 2 containers: [115868dfcf12d3df61d7ea2758ac63af46c98b1ceabde72a6d2221da0c4131f0 0a31f5df1267cd3328bbfdd3fedc65bd87628e2df6285e9129581cc8ada8712f]
	I1217 12:18:09.282454 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.288967 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.297409 1382780 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 12:18:09.297511 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 12:18:09.391374 1382780 cri.go:89] found id: "8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9"
	I1217 12:18:09.391411 1382780 cri.go:89] found id: ""
	I1217 12:18:09.391424 1382780 logs.go:282] 1 containers: [8d4257abcc02e0b326b22e4f0f7eef174eb53c8524f3be2d1c2ec6cacda86ad9]
	I1217 12:18:09.391500 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.398369 1382780 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 12:18:09.398460 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 12:18:09.467444 1382780 cri.go:89] found id: "8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:09.467477 1382780 cri.go:89] found id: ""
	I1217 12:18:09.467489 1382780 logs.go:282] 1 containers: [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe]
	I1217 12:18:09.467562 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.474936 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 12:18:09.475056 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 12:18:09.547478 1382780 cri.go:89] found id: "61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73"
	I1217 12:18:09.547637 1382780 cri.go:89] found id: "93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29"
	I1217 12:18:09.547652 1382780 cri.go:89] found id: ""
	I1217 12:18:09.547664 1382780 logs.go:282] 2 containers: [61a3362c1125eed278f3a1e277fd55186cb5a9a6d46abe44fc3affa8e15efd73 93a8117aeb3861704e0f4a968e69ad6099d1a10ef4200c74218e3b4ee581cd29]
	I1217 12:18:09.547782 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.554347 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.560316 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 12:18:09.560435 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 12:18:09.629195 1382780 cri.go:89] found id: "72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8"
	I1217 12:18:09.629229 1382780 cri.go:89] found id: ""
	I1217 12:18:09.629241 1382780 logs.go:282] 1 containers: [72e9b52477eab2e6c65c9905ad9c3fce8f0d63f48355e328d6ffc8d534d2e8e8]
	I1217 12:18:09.629311 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.635327 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 12:18:09.635444 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 12:18:09.693332 1382780 cri.go:89] found id: "b854e6ee21b41e2e2c8cf047e543ca62d41eae440ebfacbd994c3da06e4d6cdb"
	I1217 12:18:09.693373 1382780 cri.go:89] found id: "1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499"
	I1217 12:18:09.693382 1382780 cri.go:89] found id: ""
	I1217 12:18:09.693396 1382780 logs.go:282] 2 containers: [b854e6ee21b41e2e2c8cf047e543ca62d41eae440ebfacbd994c3da06e4d6cdb 1cad84e091a7b0fb33fe96b3f12d106b6d2f4c40f35592917eb724f6c64ad499]
	I1217 12:18:09.693474 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.699407 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.705607 1382780 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 12:18:09.705688 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 12:18:09.782637 1382780 cri.go:89] found id: ""
	I1217 12:18:09.782672 1382780 logs.go:282] 0 containers: []
	W1217 12:18:09.782683 1382780 logs.go:284] No container was found matching "kindnet"
	I1217 12:18:09.782691 1382780 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 12:18:09.782755 1382780 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 12:18:09.855610 1382780 cri.go:89] found id: "2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90"
	I1217 12:18:09.855640 1382780 cri.go:89] found id: ""
	I1217 12:18:09.855665 1382780 logs.go:282] 1 containers: [2cd04f24b5dbba9f39b2b052a9c3921b46dd25474c6d5f96e349f38bdb54ff90]
	I1217 12:18:09.855742 1382780 ssh_runner.go:195] Run: which crictl
	I1217 12:18:09.860699 1382780 logs.go:123] Gathering logs for coredns [8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe] ...
	I1217 12:18:09.860741 1382780 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a417acf035ecb82cedc2d150fbf5008d5f6760b294a462f11de65582b493fbe"
	I1217 12:18:07.666742 1383348 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-757245"
	I1217 12:18:07.666788 1383348 host.go:66] Checking if "old-k8s-version-757245" exists ...
	I1217 12:18:07.667687 1383348 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:18:07.667708 1383348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 12:18:07.669927 1383348 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 12:18:07.669945 1383348 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 12:18:07.674265 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.675045 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:18:07.675081 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.675462 1383348 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/old-k8s-version-757245/id_rsa Username:docker}
	I1217 12:18:07.677587 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.678238 1383348 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:06:0d", ip: ""} in network mk-old-k8s-version-757245: {Iface:virbr5 ExpiryTime:2025-12-17 13:17:30 +0000 UTC Type:0 Mac:52:54:00:52:06:0d Iaid: IPaddr:192.168.83.245 Prefix:24 Hostname:old-k8s-version-757245 Clientid:01:52:54:00:52:06:0d}
	I1217 12:18:07.678293 1383348 main.go:143] libmachine: domain old-k8s-version-757245 has defined IP address 192.168.83.245 and MAC address 52:54:00:52:06:0d in network mk-old-k8s-version-757245
	I1217 12:18:07.678837 1383348 sshutil.go:53] new ssh client: &{IP:192.168.83.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/old-k8s-version-757245/id_rsa Username:docker}
	I1217 12:18:07.998245 1383348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 12:18:08.166998 1383348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 12:18:08.234779 1383348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 12:18:08.547033 1383348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 12:18:10.287454 1383348 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.83.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.289158633s)
	I1217 12:18:10.287496 1383348 start.go:1013] {"host.minikube.internal": 192.168.83.1} host record injected into CoreDNS's ConfigMap
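	
	The sed pipeline above rewrites the Corefile inside the kube-system coredns ConfigMap so that host.minikube.internal resolves to the host-side IP 192.168.83.1. After the replace, the edited portion of the Corefile should look roughly like the fragment below; only the hosts block and the log directive are taken from the command in the log, while the remaining directives are the stock CoreDNS layout and are shown here as an assumption.
	
	    .:53 {
	        log
	        errors
	        health
	        kubernetes cluster.local in-addr.arpa ip6.arpa
	        hosts {
	           192.168.83.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	    }
	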
	I1217 12:18:10.287523 1383348 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.120489043s)
	I1217 12:18:10.288832 1383348 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-757245" to be "Ready" ...
	I1217 12:18:10.304323 1383348 node_ready.go:49] node "old-k8s-version-757245" is "Ready"
	I1217 12:18:10.304371 1383348 node_ready.go:38] duration metric: took 15.506469ms for node "old-k8s-version-757245" to be "Ready" ...
	I1217 12:18:10.304394 1383348 api_server.go:52] waiting for apiserver process to appear ...
	I1217 12:18:10.304459 1383348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 12:18:10.606060 1383348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.058978898s)
	I1217 12:18:10.606137 1383348 api_server.go:72] duration metric: took 2.945074078s to wait for apiserver process to appear ...
	I1217 12:18:10.606160 1383348 api_server.go:88] waiting for apiserver healthz status ...
	I1217 12:18:10.606172 1383348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.371350146s)
	I1217 12:18:10.606184 1383348 api_server.go:253] Checking apiserver healthz at https://192.168.83.245:8443/healthz ...
	I1217 12:18:10.623619 1383348 api_server.go:279] https://192.168.83.245:8443/healthz returned 200:
	ok
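	
	This healthz wait, like the repeated connection-refused probes from the paused cluster earlier in the log, amounts to polling the apiserver's /healthz endpoint until it answers 200. A small Go sketch of such a probe is below; skipping TLS verification and the fixed timeouts are simplifying assumptions for illustration, not how minikube authenticates to the apiserver.
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "os"
	        "time"
	    )
	
	    // probeHealthz polls the apiserver healthz endpoint until it returns 200 OK
	    // or the deadline passes.
	    func probeHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // The apiserver cert is signed by minikube's CA; this quick probe skips
	            // verification (an assumption for brevity).
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil
	                }
	            }
	            time.Sleep(time.Second)
	        }
	        return fmt.Errorf("apiserver at %s not healthy within %v", url, timeout)
	    }
	
	    func main() {
	        if err := probeHealthz("https://192.168.83.245:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        fmt.Println("ok")
	    }
	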
	I1217 12:18:10.625457 1383348 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 12:18:10.625868 1383348 api_server.go:141] control plane version: v1.28.0
	I1217 12:18:10.625898 1383348 api_server.go:131] duration metric: took 19.72983ms to wait for apiserver health ...
	I1217 12:18:10.625910 1383348 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 12:18:10.626835 1383348 addons.go:530] duration metric: took 2.965673684s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 12:18:10.634773 1383348 system_pods.go:59] 8 kube-system pods found
	I1217 12:18:10.634809 1383348 system_pods.go:61] "coredns-5dd5756b68-92xws" [a72f93e8-c61d-4063-97df-aff70d878bb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.634817 1383348 system_pods.go:61] "coredns-5dd5756b68-m495h" [93a4c8ae-8fba-4ef9-addd-fdc5e9351a90] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.634823 1383348 system_pods.go:61] "etcd-old-k8s-version-757245" [970cfdfb-d063-480d-943a-ad81930ad464] Running
	I1217 12:18:10.634828 1383348 system_pods.go:61] "kube-apiserver-old-k8s-version-757245" [511ffe7e-87bf-48fa-9e58-a02f59d4fda2] Running
	I1217 12:18:10.634832 1383348 system_pods.go:61] "kube-controller-manager-old-k8s-version-757245" [0421248f-481b-4f89-a4fb-6a94a575fc25] Running
	I1217 12:18:10.634835 1383348 system_pods.go:61] "kube-proxy-mctv5" [9b2ead72-2de2-4ad7-82ed-724dfc3461c2] Running
	I1217 12:18:10.634839 1383348 system_pods.go:61] "kube-scheduler-old-k8s-version-757245" [e386f238-5485-4cbb-9564-03614c4207d5] Running
	I1217 12:18:10.634845 1383348 system_pods.go:61] "storage-provisioner" [e3cad041-88a1-4d0e-be11-7072c4e44ddf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:18:10.634853 1383348 system_pods.go:74] duration metric: took 8.936349ms to wait for pod list to return data ...
	I1217 12:18:10.634864 1383348 default_sa.go:34] waiting for default service account to be created ...
	I1217 12:18:10.637498 1383348 default_sa.go:45] found service account: "default"
	I1217 12:18:10.637523 1383348 default_sa.go:55] duration metric: took 2.648264ms for default service account to be created ...
	I1217 12:18:10.637535 1383348 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 12:18:10.641463 1383348 system_pods.go:86] 8 kube-system pods found
	I1217 12:18:10.641499 1383348 system_pods.go:89] "coredns-5dd5756b68-92xws" [a72f93e8-c61d-4063-97df-aff70d878bb3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.641515 1383348 system_pods.go:89] "coredns-5dd5756b68-m495h" [93a4c8ae-8fba-4ef9-addd-fdc5e9351a90] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 12:18:10.641522 1383348 system_pods.go:89] "etcd-old-k8s-version-757245" [970cfdfb-d063-480d-943a-ad81930ad464] Running
	I1217 12:18:10.641529 1383348 system_pods.go:89] "kube-apiserver-old-k8s-version-757245" [511ffe7e-87bf-48fa-9e58-a02f59d4fda2] Running
	I1217 12:18:10.641535 1383348 system_pods.go:89] "kube-controller-manager-old-k8s-version-757245" [0421248f-481b-4f89-a4fb-6a94a575fc25] Running
	I1217 12:18:10.641549 1383348 system_pods.go:89] "kube-proxy-mctv5" [9b2ead72-2de2-4ad7-82ed-724dfc3461c2] Running
	I1217 12:18:10.641565 1383348 system_pods.go:89] "kube-scheduler-old-k8s-version-757245" [e386f238-5485-4cbb-9564-03614c4207d5] Running
	I1217 12:18:10.641576 1383348 system_pods.go:89] "storage-provisioner" [e3cad041-88a1-4d0e-be11-7072c4e44ddf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 12:18:10.641585 1383348 system_pods.go:126] duration metric: took 4.043826ms to wait for k8s-apps to be running ...
	I1217 12:18:10.641598 1383348 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 12:18:10.641659 1383348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 12:18:10.664683 1383348 system_svc.go:56] duration metric: took 23.073336ms WaitForService to wait for kubelet
	I1217 12:18:10.664719 1383348 kubeadm.go:587] duration metric: took 3.003658588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 12:18:10.664742 1383348 node_conditions.go:102] verifying NodePressure condition ...
	I1217 12:18:10.669436 1383348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 12:18:10.669475 1383348 node_conditions.go:123] node cpu capacity is 2
	I1217 12:18:10.669495 1383348 node_conditions.go:105] duration metric: took 4.746821ms to run NodePressure ...
	I1217 12:18:10.669510 1383348 start.go:242] waiting for startup goroutines ...
	I1217 12:18:08.294033 1383836 ssh_runner.go:195] Run: openssl version
	I1217 12:18:08.311655 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.345003 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/13499072.pem /etc/ssl/certs/13499072.pem
	I1217 12:18:08.386518 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399088 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 11:25 /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.399191 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13499072.pem
	I1217 12:18:08.416105 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 12:18:08.442880 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.476720 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 12:18:08.501370 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511800 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 11:15 /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.511899 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 12:18:08.524569 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 12:18:08.553109 1383836 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.572282 1383836 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1349907.pem /etc/ssl/certs/1349907.pem
	I1217 12:18:08.604798 1383836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.616889 1383836 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 11:25 /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.617016 1383836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1349907.pem
	I1217 12:18:08.630019 1383836 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 12:18:08.653536 1383836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 12:18:08.666006 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 12:18:08.679610 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 12:18:08.693271 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 12:18:08.704893 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 12:18:08.713855 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 12:18:08.724142 1383836 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 12:18:08.735084 1383836 kubeadm.go:401] StartCluster: {Name:pause-137189 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 Cl
usterName:pause-137189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.45 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 12:18:08.735292 1383836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 12:18:08.735358 1383836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 12:18:08.791645 1383836 cri.go:89] found id: "310d732afecf22f7a55f5b9312ad9e71118394ff09fc9f7d7c3eaf2de48cad02"
	I1217 12:18:08.791672 1383836 cri.go:89] found id: "d958a10e60bb18b7c6cfef7e922ec6c511df7903bff6d3fe4b2efb6fb756059c"
	I1217 12:18:08.791677 1383836 cri.go:89] found id: "1944d91c94e5183e69b38181a36718fe96c0be4386a877f00873165f1ee8b0b9"
	I1217 12:18:08.791699 1383836 cri.go:89] found id: "0b055307c937cef89a52e812a0b2a6ef7b83b6907d8c9cd10303092d207d0795"
	I1217 12:18:08.791703 1383836 cri.go:89] found id: "d3b342c3641fa821eadfb0cc69320076516baa945a7859a71b098f85087a5809"
	I1217 12:18:08.791709 1383836 cri.go:89] found id: "e1ade8faaa4b5b905c5a7436d0db742ad1837dde6e3fb0d4c61c936242632f16"
	I1217 12:18:08.791714 1383836 cri.go:89] found id: "0efd0e07325d21b417fc524dc11c66a45c3ed8db4fe88ebeed1de2dad9969f68"
	I1217 12:18:08.791718 1383836 cri.go:89] found id: "efc4e6ac4add4a3d2e1c7ae474271d1f76d922e4d443a1d8880e722d4469f383"
	I1217 12:18:08.791722 1383836 cri.go:89] found id: "166a9985e700638b97cb2541dc51b9d8a9c04973af2c6bedc9713270addf8697"
	I1217 12:18:08.791739 1383836 cri.go:89] found id: "119b3f1b9c1651145ae076affb70e219939b71e58a4f9e72b0af00646d803e4d"
	I1217 12:18:08.791752 1383836 cri.go:89] found id: "686717c825f6ddedcf110c0e997874c12e953f5c4803eccb336ff9aa50b1b3e1"
	I1217 12:18:08.791757 1383836 cri.go:89] found id: ""
	I1217 12:18:08.791821 1383836 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
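The dense sed pipeline near the top of this log dump is how minikube injects the host.minikube.internal record into the coredns ConfigMap. Reconstructed from that sed expression alone (the surrounding Corefile lines are elided and the stock server block is assumed), the fragment it adds looks roughly like this:

    .:53 {
        log                                   # inserted before the existing "errors" line
        errors
        ...
        hosts {
           192.168.83.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts plugin answers queries for host.minikube.internal with the VM's gateway address and falls through to the next plugin, so all other names keep being forwarded as before.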
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-137189 -n pause-137189
helpers_test.go:270: (dbg) Run:  kubectl --context pause-137189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (73.70s)
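Earlier in that post-mortem, the pause profile re-validates its control-plane certificates with openssl x509 -checkend 86400 before reusing them. A minimal sketch of the same freshness check, using two of the certificate paths shown in the log (sketch only, not the test's own code):

    # -checkend 86400 exits 0 only if the cert is still valid 86400s (24h) from now
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt; do
      if sudo openssl x509 -noout -in "$crt" -checkend 86400; then
        echo "$crt: valid for at least another day"
      else
        echo "$crt: expiring (or unreadable), would need regeneration"
      fi
    done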

                                                
                                    

Test pass (376/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.29
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.3/json-events 11.36
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.08
18 TestDownloadOnly/v1.34.3/DeleteAll 0.16
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-rc.1/json-events 10.53
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.11
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.68
31 TestOffline 71.26
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 126.22
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 11.51
44 TestAddons/parallel/Registry 18.42
45 TestAddons/parallel/RegistryCreds 0.76
47 TestAddons/parallel/InspektorGadget 11.03
48 TestAddons/parallel/MetricsServer 6.15
50 TestAddons/parallel/CSI 45.88
51 TestAddons/parallel/Headlamp 19.14
52 TestAddons/parallel/CloudSpanner 5.51
53 TestAddons/parallel/LocalPath 56.76
54 TestAddons/parallel/NvidiaDevicePlugin 6.91
55 TestAddons/parallel/Yakd 11.83
57 TestAddons/StoppedEnableDisable 81.54
58 TestCertOptions 74.53
59 TestCertExpiration 286.45
61 TestForceSystemdFlag 78.9
62 TestForceSystemdEnv 38.44
67 TestErrorSpam/setup 35.2
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.65
70 TestErrorSpam/pause 1.47
71 TestErrorSpam/unpause 1.64
72 TestErrorSpam/stop 4.93
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 48.13
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 38.72
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
84 TestFunctional/serial/CacheCmd/cache/add_local 2.2
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.48
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 33.02
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.24
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 4.04
98 TestFunctional/parallel/ConfigCmd 0.45
99 TestFunctional/parallel/DashboardCmd 41.92
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.87
106 TestFunctional/parallel/ServiceCmdConnect 11.41
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 41.3
110 TestFunctional/parallel/SSHCmd 0.32
111 TestFunctional/parallel/CpCmd 1.2
112 TestFunctional/parallel/MySQL 36.52
113 TestFunctional/parallel/FileSync 0.16
114 TestFunctional/parallel/CertSync 0.99
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
122 TestFunctional/parallel/License 0.4
123 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
124 TestFunctional/parallel/Version/short 0.07
125 TestFunctional/parallel/Version/components 0.39
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.75
131 TestFunctional/parallel/ImageCommands/Setup 1.98
132 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
136 TestFunctional/parallel/ProfileCmd/profile_list 0.31
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.7
145 TestFunctional/parallel/MountCmd/any-port 30.09
146 TestFunctional/parallel/ServiceCmd/List 0.91
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.85
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
149 TestFunctional/parallel/ServiceCmd/Format 0.28
150 TestFunctional/parallel/ServiceCmd/URL 0.28
160 TestFunctional/parallel/MountCmd/specific-port 1.4
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.43
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 72.5
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 36.66
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 3.54
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.26
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.51
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 99.85
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.3
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.3
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.09
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.46
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 15.37
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.25
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.71
199 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 26.82
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.17
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 48.64
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.41
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.22
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 37.62
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.17
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.19
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.39
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.39
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 10.21
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.4
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.21
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.24
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.21
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.35
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 13.77
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.93
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.34
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.36
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 9.39
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.62
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.33
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.92
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.08
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.08
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.08
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.51
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.45
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.76
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.56
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 0.47
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.45
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.61
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.24
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.23
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.24
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.25
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 230.43
262 TestMultiControlPlane/serial/DeployApp 6.78
263 TestMultiControlPlane/serial/PingHostFromPods 1.33
264 TestMultiControlPlane/serial/AddWorkerNode 48.8
265 TestMultiControlPlane/serial/NodeLabels 0.08
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 10.89
268 TestMultiControlPlane/serial/StopSecondaryNode 84.66
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
270 TestMultiControlPlane/serial/RestartSecondaryNode 31.71
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 363.66
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.9
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 248.17
276 TestMultiControlPlane/serial/RestartCluster 96.7
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
278 TestMultiControlPlane/serial/AddSecondaryNode 103.6
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
284 TestJSONOutput/start/Command 47.17
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.64
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.83
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 75.36
316 TestMountStart/serial/StartWithMountFirst 20.34
317 TestMountStart/serial/VerifyMountFirst 0.32
318 TestMountStart/serial/StartWithMountSecond 21.29
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.24
323 TestMountStart/serial/RestartStopped 18.58
324 TestMountStart/serial/VerifyMountPostStop 0.3
327 TestMultiNode/serial/FreshStart2Nodes 99.37
328 TestMultiNode/serial/DeployApp2Nodes 6.02
329 TestMultiNode/serial/PingHostFrom2Pods 0.86
330 TestMultiNode/serial/AddNode 45.38
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.43
333 TestMultiNode/serial/CopyFile 5.88
334 TestMultiNode/serial/StopNode 2.26
335 TestMultiNode/serial/StartAfterStop 36.91
336 TestMultiNode/serial/RestartKeepsNodes 292.29
337 TestMultiNode/serial/DeleteNode 2.58
338 TestMultiNode/serial/StopMultiNode 165.12
339 TestMultiNode/serial/RestartMultiNode 93.1
340 TestMultiNode/serial/ValidateNameConflict 38.26
347 TestScheduledStopUnix 106.87
351 TestRunningBinaryUpgrade 349.89
353 TestKubernetesUpgrade 114.35
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 93.84
358 TestNoKubernetes/serial/StartWithStopK8s 49.51
359 TestNoKubernetes/serial/Start 29.97
360 TestStoppedBinaryUpgrade/Setup 3.2
361 TestStoppedBinaryUpgrade/Upgrade 107.22
362 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
363 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
364 TestNoKubernetes/serial/ProfileList 10.12
365 TestNoKubernetes/serial/Stop 1.42
366 TestNoKubernetes/serial/StartNoArgs 30.39
367 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
382 TestNetworkPlugins/group/false 3.81
387 TestPause/serial/Start 105.99
388 TestStoppedBinaryUpgrade/MinikubeLogs 1
389 TestISOImage/Setup 30.93
391 TestISOImage/Binaries/crictl 0.2
392 TestISOImage/Binaries/curl 0.17
393 TestISOImage/Binaries/docker 0.19
394 TestISOImage/Binaries/git 0.19
395 TestISOImage/Binaries/iptables 0.2
396 TestISOImage/Binaries/podman 0.2
397 TestISOImage/Binaries/rsync 0.19
398 TestISOImage/Binaries/socat 0.2
399 TestISOImage/Binaries/wget 0.21
400 TestISOImage/Binaries/VBoxControl 0.2
401 TestISOImage/Binaries/VBoxService 0.19
403 TestStartStop/group/old-k8s-version/serial/FirstStart 109.7
405 TestStartStop/group/no-preload/serial/FirstStart 101.76
407 TestStartStop/group/old-k8s-version/serial/DeployApp 11.4
409 TestStartStop/group/embed-certs/serial/FirstStart 79.48
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
411 TestStartStop/group/old-k8s-version/serial/Stop 81.15
412 TestStartStop/group/no-preload/serial/DeployApp 10.32
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.82
414 TestStartStop/group/no-preload/serial/Stop 85.36
415 TestStartStop/group/embed-certs/serial/DeployApp 10.29
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
417 TestStartStop/group/embed-certs/serial/Stop 83.83
418 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
419 TestStartStop/group/old-k8s-version/serial/SecondStart 48.78
420 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
421 TestStartStop/group/no-preload/serial/SecondStart 54.34
423 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 95.64
424 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 13.01
425 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
426 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
427 TestStartStop/group/old-k8s-version/serial/Pause 3.12
429 TestStartStop/group/newest-cni/serial/FirstStart 44.61
430 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
431 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
432 TestStartStop/group/embed-certs/serial/SecondStart 57.11
433 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
434 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
435 TestStartStop/group/no-preload/serial/Pause 2.62
436 TestNetworkPlugins/group/auto/Start 75.71
437 TestStartStop/group/newest-cni/serial/DeployApp 0
438 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
439 TestStartStop/group/newest-cni/serial/Stop 8.55
440 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
441 TestStartStop/group/newest-cni/serial/SecondStart 44.55
442 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
443 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
444 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.56
445 TestStartStop/group/default-k8s-diff-port/serial/Stop 83.12
446 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
447 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
448 TestStartStop/group/embed-certs/serial/Pause 3.29
449 TestNetworkPlugins/group/kindnet/Start 64.05
450 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
452 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
453 TestStartStop/group/newest-cni/serial/Pause 3.23
454 TestNetworkPlugins/group/auto/KubeletFlags 0.21
455 TestNetworkPlugins/group/auto/NetCatPod 11.29
456 TestNetworkPlugins/group/calico/Start 95.71
457 TestNetworkPlugins/group/auto/DNS 0.15
458 TestNetworkPlugins/group/auto/Localhost 0.12
459 TestNetworkPlugins/group/auto/HairPin 0.13
460 TestNetworkPlugins/group/custom-flannel/Start 72.76
461 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
462 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.87
463 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
464 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
465 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
466 TestNetworkPlugins/group/kindnet/DNS 0.2
467 TestNetworkPlugins/group/kindnet/Localhost 0.18
468 TestNetworkPlugins/group/kindnet/HairPin 0.14
469 TestNetworkPlugins/group/enable-default-cni/Start 82.32
470 TestNetworkPlugins/group/calico/ControllerPod 6.01
471 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
472 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
473 TestNetworkPlugins/group/calico/KubeletFlags 0.2
474 TestNetworkPlugins/group/calico/NetCatPod 11.81
475 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.15
476 TestNetworkPlugins/group/custom-flannel/DNS 0.21
477 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
478 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
479 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
480 TestNetworkPlugins/group/calico/DNS 0.23
481 TestNetworkPlugins/group/calico/Localhost 0.16
482 TestNetworkPlugins/group/calico/HairPin 0.15
483 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
484 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.17
485 TestNetworkPlugins/group/flannel/Start 64.9
486 TestNetworkPlugins/group/bridge/Start 107.73
488 TestISOImage/PersistentMounts//data 0.17
489 TestISOImage/PersistentMounts//var/lib/docker 0.2
490 TestISOImage/PersistentMounts//var/lib/cni 0.22
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
492 TestISOImage/PersistentMounts//var/lib/minikube 0.17
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.23
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.2
495 TestISOImage/VersionJSON 0.21
496 TestISOImage/eBPFSupport 0.18
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
499 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
500 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
501 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
504 TestNetworkPlugins/group/flannel/NetCatPod 10.23
505 TestNetworkPlugins/group/flannel/DNS 0.14
506 TestNetworkPlugins/group/flannel/Localhost 0.12
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
509 TestNetworkPlugins/group/bridge/NetCatPod 11.21
510 TestNetworkPlugins/group/bridge/DNS 0.14
511 TestNetworkPlugins/group/bridge/Localhost 0.12
512 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.28.0/json-events (25.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-349022 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-349022 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (25.289320249s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.29s)
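This subtest only exercises the download path; no VM is created (the LogsDuration output further down confirms the host never existed). A sketch of reproducing the run by hand and inspecting what it caches, assuming the default MINIKUBE_HOME rather than the per-run directory this CI job uses:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-349022 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=kvm2
    # artifacts land in the cache, mirroring the paths in the logs below
    ls ~/.minikube/cache/iso/amd64/                   # VM boot image
    ls ~/.minikube/cache/preloaded-tarball/           # preloaded images tarball
    ls ~/.minikube/cache/linux/amd64/v1.28.0/         # kubectl binary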

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 11:14:55.628267 1349907 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 11:14:55.628395 1349907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
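The preload-exists check boils down to a local file test against the tarball cached by the previous subtest; roughly, with MINIKUBE_HOME standing in for the run's .minikube directory shown in the log line above:

    PRELOAD="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    test -s "$PRELOAD" && echo "preload present"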

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-349022
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-349022: exit status 85 (77.048594ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-349022 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-349022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:14:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:14:30.392897 1349919 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:14:30.393009 1349919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:30.393015 1349919 out.go:374] Setting ErrFile to fd 2...
	I1217 11:14:30.393020 1349919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:30.393194 1349919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	W1217 11:14:30.393320 1349919 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21808-1345916/.minikube/config/config.json: open /home/jenkins/minikube-integration/21808-1345916/.minikube/config/config.json: no such file or directory
	I1217 11:14:30.393784 1349919 out.go:368] Setting JSON to true
	I1217 11:14:30.394758 1349919 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17809,"bootTime":1765952261,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:14:30.394809 1349919 start.go:143] virtualization: kvm guest
	I1217 11:14:30.399956 1349919 out.go:99] [download-only-349022] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 11:14:30.400123 1349919 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 11:14:30.400157 1349919 notify.go:221] Checking for updates...
	I1217 11:14:30.401147 1349919 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:14:30.402190 1349919 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:14:30.403177 1349919 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:14:30.404120 1349919 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:14:30.405062 1349919 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:14:30.406809 1349919 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:14:30.407058 1349919 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:14:30.438162 1349919 out.go:99] Using the kvm2 driver based on user configuration
	I1217 11:14:30.438194 1349919 start.go:309] selected driver: kvm2
	I1217 11:14:30.438203 1349919 start.go:927] validating driver "kvm2" against <nil>
	I1217 11:14:30.438523 1349919 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:14:30.439039 1349919 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 11:14:30.439204 1349919 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:14:30.439256 1349919 cni.go:84] Creating CNI manager for ""
	I1217 11:14:30.439318 1349919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:14:30.439329 1349919 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 11:14:30.439381 1349919 start.go:353] cluster config:
	{Name:download-only-349022 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-349022 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:14:30.439619 1349919 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:14:30.441019 1349919 out.go:99] Downloading VM boot image ...
	I1217 11:14:30.441054 1349919 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 11:14:41.977275 1349919 out.go:99] Starting "download-only-349022" primary control-plane node in "download-only-349022" cluster
	I1217 11:14:41.977322 1349919 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:14:42.080324 1349919 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:42.080359 1349919 cache.go:65] Caching tarball of preloaded images
	I1217 11:14:42.080585 1349919 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:14:42.082370 1349919 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 11:14:42.082392 1349919 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:14:42.191728 1349919 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 11:14:42.191860 1349919 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:54.585593 1349919 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1217 11:14:54.585978 1349919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/download-only-349022/config.json ...
	I1217 11:14:54.586032 1349919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/download-only-349022/config.json: {Name:mkb9af97180f36306c5eaced0175816ced4f3900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:14:54.586218 1349919 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 11:14:54.586388 1349919 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-349022 host does not exist
	  To start a cluster, run: "minikube start -p download-only-349022"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
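The Last Start log above shows the preload being fetched with an md5 checksum obtained from the GCS API (the ?checksum=md5:... suffix on the download URL). A sketch of verifying the cached tarball by hand against the checksum this log reports:

    TARBALL="$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    # md5sum -c expects "<checksum>  <path>" pairs on stdin
    echo "72bc7f8573f574c02d8c9a9b3496176b  $TARBALL" | md5sum -c -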

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-349022
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)
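Judging from the order of these subtests, this verifies that deletion is idempotent: delete -p still exits cleanly even though the DeleteAll subtest has already removed every profile. The two calls, as the log shows them (sketch of the sequence, not the test code):

    out/minikube-linux-amd64 delete --all                     # removes all profiles
    out/minikube-linux-amd64 delete -p download-only-349022   # profile is already gone, still succeeds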

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/json-events (11.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-999267 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-999267 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.362765737s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (11.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 11:15:07.373207 1349907 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 11:15:07.373259 1349907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-999267
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-999267: exit status 85 (77.151797ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-349022 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-349022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ delete  │ -p download-only-349022                                                                                                                                                 │ download-only-349022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ start   │ -o=json --download-only -p download-only-999267 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-999267 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:14:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:14:56.063930 1350165 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:14:56.064043 1350165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:56.064052 1350165 out.go:374] Setting ErrFile to fd 2...
	I1217 11:14:56.064057 1350165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:14:56.064266 1350165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:14:56.064707 1350165 out.go:368] Setting JSON to true
	I1217 11:14:56.065625 1350165 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17835,"bootTime":1765952261,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:14:56.065679 1350165 start.go:143] virtualization: kvm guest
	I1217 11:14:56.067556 1350165 out.go:99] [download-only-999267] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:14:56.067748 1350165 notify.go:221] Checking for updates...
	I1217 11:14:56.068880 1350165 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:14:56.070209 1350165 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:14:56.071274 1350165 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:14:56.072407 1350165 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:14:56.073475 1350165 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:14:56.075143 1350165 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:14:56.075380 1350165 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:14:56.105372 1350165 out.go:99] Using the kvm2 driver based on user configuration
	I1217 11:14:56.105408 1350165 start.go:309] selected driver: kvm2
	I1217 11:14:56.105414 1350165 start.go:927] validating driver "kvm2" against <nil>
	I1217 11:14:56.105739 1350165 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:14:56.106260 1350165 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 11:14:56.106391 1350165 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:14:56.106419 1350165 cni.go:84] Creating CNI manager for ""
	I1217 11:14:56.106467 1350165 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:14:56.106476 1350165 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 11:14:56.106509 1350165 start.go:353] cluster config:
	{Name:download-only-999267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-999267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:14:56.106598 1350165 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:14:56.107774 1350165 out.go:99] Starting "download-only-999267" primary control-plane node in "download-only-999267" cluster
	I1217 11:14:56.107829 1350165 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:14:56.220064 1350165 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 11:14:56.220092 1350165 cache.go:65] Caching tarball of preloaded images
	I1217 11:14:56.220269 1350165 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 11:14:56.221942 1350165 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1217 11:14:56.221966 1350165 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:14:56.336269 1350165 preload.go:295] Got checksum from GCS API "fdea575627999e8631bb8fa579d884c7"
	I1217 11:14:56.336317 1350165 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:fdea575627999e8631bb8fa579d884c7 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-999267 host does not exist
	  To start a cluster, run: "minikube start -p download-only-999267"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-999267
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/json-events (10.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-783543 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-783543 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.531711779s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (10.53s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 11:15:18.287875 1349907 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 11:15:18.287932 1349907 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-783543
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-783543: exit status 85 (104.906926ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                     │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-349022 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-349022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ delete  │ -p download-only-349022                                                                                                                                                      │ download-only-349022 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │ 17 Dec 25 11:14 UTC │
	│ start   │ -o=json --download-only -p download-only-999267 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio      │ download-only-999267 │ jenkins │ v1.37.0 │ 17 Dec 25 11:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                        │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ delete  │ -p download-only-999267                                                                                                                                                      │ download-only-999267 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │ 17 Dec 25 11:15 UTC │
	│ start   │ -o=json --download-only -p download-only-783543 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-783543 │ jenkins │ v1.37.0 │ 17 Dec 25 11:15 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 11:15:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 11:15:07.809522 1350361 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:15:07.809792 1350361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:07.809803 1350361 out.go:374] Setting ErrFile to fd 2...
	I1217 11:15:07.809807 1350361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:15:07.809975 1350361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:15:07.810456 1350361 out.go:368] Setting JSON to true
	I1217 11:15:07.811365 1350361 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17847,"bootTime":1765952261,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:15:07.811422 1350361 start.go:143] virtualization: kvm guest
	I1217 11:15:07.813257 1350361 out.go:99] [download-only-783543] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:15:07.813379 1350361 notify.go:221] Checking for updates...
	I1217 11:15:07.814610 1350361 out.go:171] MINIKUBE_LOCATION=21808
	I1217 11:15:07.815665 1350361 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:15:07.816699 1350361 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:15:07.817643 1350361 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:15:07.818667 1350361 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 11:15:07.820576 1350361 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 11:15:07.820801 1350361 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:15:07.851087 1350361 out.go:99] Using the kvm2 driver based on user configuration
	I1217 11:15:07.851130 1350361 start.go:309] selected driver: kvm2
	I1217 11:15:07.851140 1350361 start.go:927] validating driver "kvm2" against <nil>
	I1217 11:15:07.851603 1350361 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 11:15:07.852327 1350361 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 11:15:07.852521 1350361 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 11:15:07.852569 1350361 cni.go:84] Creating CNI manager for ""
	I1217 11:15:07.852632 1350361 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 11:15:07.852646 1350361 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 11:15:07.852694 1350361 start.go:353] cluster config:
	{Name:download-only-783543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-783543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:15:07.852835 1350361 iso.go:125] acquiring lock: {Name:mkf3f94e126ae38d32753ef0086ea24e79e9b483 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 11:15:07.854052 1350361 out.go:99] Starting "download-only-783543" primary control-plane node in "download-only-783543" cluster
	I1217 11:15:07.854073 1350361 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:15:07.957804 1350361 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:07.957860 1350361 cache.go:65] Caching tarball of preloaded images
	I1217 11:15:07.958131 1350361 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:15:07.959823 1350361 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1217 11:15:07.959845 1350361 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 11:15:08.068767 1350361 preload.go:295] Got checksum from GCS API "46a82b10f18f180acaede5af8ca381a9"
	I1217 11:15:08.068816 1350361 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:46a82b10f18f180acaede5af8ca381a9 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 11:15:17.212424 1350361 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 11:15:17.212906 1350361 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/download-only-783543/config.json ...
	I1217 11:15:17.212966 1350361 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/download-only-783543/config.json: {Name:mk496767884d68848ef62a68d1ade0864027819e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 11:15:17.213257 1350361 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 11:15:17.213531 1350361 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl
	
	
	* The control-plane node download-only-783543 host does not exist
	  To start a cluster, run: "minikube start -p download-only-783543"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.11s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-783543
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 11:15:19.189925 1349907 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-200790 --alsologtostderr --binary-mirror http://127.0.0.1:44955 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-200790" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-200790
--- PASS: TestBinaryMirror (0.68s)

                                                
                                    
TestOffline (71.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-323658 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-323658 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.348949383s)
helpers_test.go:176: Cleaning up "offline-crio-323658" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-323658
--- PASS: TestOffline (71.26s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-410268
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-410268: exit status 85 (75.621352ms)

                                                
                                                
-- stdout --
	* Profile "addons-410268" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-410268"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-410268
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-410268: exit status 85 (74.779253ms)

                                                
                                                
-- stdout --
	* Profile "addons-410268" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-410268"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (126.22s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-410268 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-410268 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m6.217363246s)
--- PASS: TestAddons/Setup (126.22s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-410268 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-410268 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-410268 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-410268 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [89b289cf-cd57-4583-9745-2ff3ad4a62ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [89b289cf-cd57-4583-9745-2ff3ad4a62ac] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004073321s
addons_test.go:696: (dbg) Run:  kubectl --context addons-410268 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-410268 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-410268 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.51s)

                                                
                                    
TestAddons/parallel/Registry (18.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.621624ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-zzpqs" [5234c3bf-e000-4d51-80db-779c52aba6bd] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003839035s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-tgq9f" [acc44f29-6589-4709-855b-7ecb669c57b3] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003170628s
addons_test.go:394: (dbg) Run:  kubectl --context addons-410268 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-410268 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-410268 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.671072167s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 ip
2025/12/17 11:18:04 [DEBUG] GET http://192.168.39.28:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.42s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.250873ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-410268
addons_test.go:334: (dbg) Run:  kubectl --context addons-410268 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-2nlrj" [82fdbc8c-8d7a-4522-930a-30d0dd3ab58c] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004008195s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable inspektor-gadget --alsologtostderr -v=1: (6.028706328s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.15s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.71266ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-wzdd7" [45eadf4d-9bab-4bbf-88c7-99c4433a113d] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003765563s
addons_test.go:465: (dbg) Run:  kubectl --context addons-410268 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable metrics-server --alsologtostderr -v=1: (1.069281487s)
--- PASS: TestAddons/parallel/MetricsServer (6.15s)

                                                
                                    
TestAddons/parallel/CSI (45.88s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 11:18:06.116548 1349907 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 11:18:06.123684 1349907 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 11:18:06.123713 1349907 kapi.go:107] duration metric: took 7.182591ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.192844ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-410268 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-410268 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [6809fb24-174c-4c3c-9650-9a6225b5ed43] Pending
helpers_test.go:353: "task-pv-pod" [6809fb24-174c-4c3c-9650-9a6225b5ed43] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [6809fb24-174c-4c3c-9650-9a6225b5ed43] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003213285s
addons_test.go:574: (dbg) Run:  kubectl --context addons-410268 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-410268 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-410268 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-410268 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-410268 delete pod task-pv-pod: (1.338960692s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-410268 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-410268 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-410268 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [92e53a6a-48e4-41ad-ba23-729f631c1bf4] Pending
helpers_test.go:353: "task-pv-pod-restore" [92e53a6a-48e4-41ad-ba23-729f631c1bf4] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004152156s
addons_test.go:616: (dbg) Run:  kubectl --context addons-410268 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-410268 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-410268 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.791920038s)
--- PASS: TestAddons/parallel/CSI (45.88s)

                                                
                                    
TestAddons/parallel/Headlamp (19.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-410268 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-l6h4r" [24a95210-10d3-4de1-802a-92424ebfe63b] Pending
helpers_test.go:353: "headlamp-dfcdc64b-l6h4r" [24a95210-10d3-4de1-802a-92424ebfe63b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-l6h4r" [24a95210-10d3-4de1-802a-92424ebfe63b] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-l6h4r" [24a95210-10d3-4de1-802a-92424ebfe63b] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006695182s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable headlamp --alsologtostderr -v=1: (6.267811876s)
--- PASS: TestAddons/parallel/Headlamp (19.14s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-kjv5b" [5e828a1b-b74b-4a0d-8a75-b213bbfd1365] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003953198s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                    
TestAddons/parallel/LocalPath (56.76s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-410268 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-410268 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [36b6500c-0cab-43b2-a1ae-18aeede8d155] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [36b6500c-0cab-43b2-a1ae-18aeede8d155] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [36b6500c-0cab-43b2-a1ae-18aeede8d155] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003399846s
addons_test.go:969: (dbg) Run:  kubectl --context addons-410268 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 ssh "cat /opt/local-path-provisioner/pvc-b4fbc5e0-3297-44da-8635-bcba4bc247bc_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-410268 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-410268 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.879012163s)
--- PASS: TestAddons/parallel/LocalPath (56.76s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.91s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-5czqh" [22222c18-08cb-4be5-93fc-4e2715120b95] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006394964s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.91s)

                                                
                                    
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-9sljq" [e55ee1a9-0099-4413-bb37-008f06ae16d9] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003982893s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-410268 addons disable yakd --alsologtostderr -v=1: (5.822490707s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
TestAddons/StoppedEnableDisable (81.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-410268
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-410268: (1m21.336802271s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-410268
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-410268
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-410268
--- PASS: TestAddons/StoppedEnableDisable (81.54s)

                                                
                                    
TestCertOptions (74.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-218423 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-218423 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.117744291s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-218423 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-218423 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-218423 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-218423" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-218423
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-218423: (1.002393459s)
--- PASS: TestCertOptions (74.53s)

                                                
                                    
TestCertExpiration (286.45s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-026544 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-026544 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m15.54858361s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-026544 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-026544 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.997256871s)
helpers_test.go:176: Cleaning up "cert-expiration-026544" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-026544
--- PASS: TestCertExpiration (286.45s)

                                                
                                    
TestForceSystemdFlag (78.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-426266 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-426266 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.827251531s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-426266 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-426266" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-426266
--- PASS: TestForceSystemdFlag (78.90s)

                                                
                                    
TestForceSystemdEnv (38.44s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-387446 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-387446 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (36.572643588s)
helpers_test.go:176: Cleaning up "force-systemd-env-387446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-387446
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-387446: (1.868638292s)
--- PASS: TestForceSystemdEnv (38.44s)

                                                
                                    
TestErrorSpam/setup (35.2s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-262259 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-262259 --driver=kvm2  --container-runtime=crio
E1217 11:22:27.382007 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.391348 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.403137 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.424520 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.466029 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.547507 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:27.709074 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:28.030782 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:22:28.673039 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-262259 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-262259 --driver=kvm2  --container-runtime=crio: (35.20352787s)
--- PASS: TestErrorSpam/setup (35.20s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.65s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 status
E1217 11:22:29.955010 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 status
--- PASS: TestErrorSpam/status (0.65s)

                                                
                                    
x
+
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 unpause
E1217 11:22:32.516796 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
x
+
TestErrorSpam/stop (4.93s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop: (1.862935015s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop: (1.800337188s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop
E1217 11:22:37.638593 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-262259 --log_dir /tmp/nospam-262259 stop: (1.266732776s)
--- PASS: TestErrorSpam/stop (4.93s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/test/nested/copy/1349907/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (48.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1217 11:22:47.880444 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:23:08.362502 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-843867 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (48.12856355s)
--- PASS: TestFunctional/serial/StartWithProxy (48.13s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.72s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 11:23:26.957924 1349907 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --alsologtostderr -v=8
E1217 11:23:49.325249 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-843867 --alsologtostderr -v=8: (38.721510108s)
functional_test.go:678: soft start took 38.722339418s for "functional-843867" cluster.
I1217 11:24:05.679815 1349907 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (38.72s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-843867 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 cache add registry.k8s.io/pause:3.1: (1.05835775s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 cache add registry.k8s.io/pause:3.3: (1.02496217s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)
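
The commands above populate minikube's host-side image cache with remote images. A minimal shell sketch for reproducing the same flow by hand, where `minikube` stands in for the out/minikube-linux-amd64 binary used by the suite:

# add remote images to the profile's cache, then list what is cached
minikube -p functional-843867 cache add registry.k8s.io/pause:3.1
minikube -p functional-843867 cache add registry.k8s.io/pause:3.3
minikube -p functional-843867 cache add registry.k8s.io/pause:latest
minikube cache list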

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-843867 /tmp/TestFunctionalserialCacheCmdcacheadd_local2474022714/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache add minikube-local-cache-test:functional-843867
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 cache add minikube-local-cache-test:functional-843867: (1.870097359s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache delete minikube-local-cache-test:functional-843867
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-843867
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (169.460294ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.48s)
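
The cache_reload steps above show the recovery path: the image is removed from the node's runtime, `crictl inspecti` confirms it is gone (non-zero exit), and `cache reload` pushes it back from the host cache. A sketch of the same sequence, with `minikube` standing in for out/minikube-linux-amd64:

# remove the cached image from the node, verify it is absent, then restore it
minikube -p functional-843867 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-843867 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: no such image
minikube -p functional-843867 cache reload
minikube -p functional-843867 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed again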

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 kubectl -- --context functional-843867 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-843867 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (33.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-843867 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.020505594s)
functional_test.go:776: restart took 33.020642491s for "functional-843867" cluster.
I1217 11:24:46.228367 1349907 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (33.02s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-843867 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 logs: (1.236288902s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 logs --file /tmp/TestFunctionalserialLogsFileCmd1669183850/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 logs --file /tmp/TestFunctionalserialLogsFileCmd1669183850/001/logs.txt: (1.198328818s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.04s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-843867 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-843867
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-843867: exit status 115 (223.130442ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.235:31208 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-843867 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.04s)
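
The test applies testdata/invalidsvc.yaml (not reproduced in this log) and expects `minikube service` to fail with SVC_UNREACHABLE (exit 115) because no running pod backs the service. A hypothetical stand-in manifest whose selector matches no pods should show the same behaviour; the selector value and port below are illustrative only:

kubectl --context functional-843867 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist
  ports:
  - port: 80
EOF
# with no running pod behind the service, this should exit 115 as in the output above
minikube -p functional-843867 service invalid-svc
kubectl --context functional-843867 delete svc invalid-svc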

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 config get cpus: exit status 14 (68.836087ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 config get cpus: exit status 14 (67.733245ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
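
The block above exercises `minikube config` round-trips; `config get` on a key that is not set exits with status 14. A brief sketch, with `minikube` standing in for out/minikube-linux-amd64:

minikube -p functional-843867 config set cpus 2
minikube -p functional-843867 config get cpus                     # prints the stored value
minikube -p functional-843867 config unset cpus
minikube -p functional-843867 config get cpus || echo "exit $?"   # exit 14: key not found in config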

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (41.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843867 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-843867 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1355945: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (41.92s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843867 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (116.908861ms)

                                                
                                                
-- stdout --
	* [functional-843867] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:24:55.977761 1355867 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:24:55.977874 1355867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:24:55.977883 1355867 out.go:374] Setting ErrFile to fd 2...
	I1217 11:24:55.977887 1355867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:24:55.978148 1355867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:24:55.978608 1355867 out.go:368] Setting JSON to false
	I1217 11:24:55.979619 1355867 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18435,"bootTime":1765952261,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:24:55.979672 1355867 start.go:143] virtualization: kvm guest
	I1217 11:24:55.983115 1355867 out.go:179] * [functional-843867] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:24:55.984288 1355867 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:24:55.984300 1355867 notify.go:221] Checking for updates...
	I1217 11:24:55.986380 1355867 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:24:55.987483 1355867 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:24:55.988656 1355867 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:24:55.989690 1355867 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:24:55.990717 1355867 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:24:55.992321 1355867 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:24:55.992796 1355867 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:24:56.026034 1355867 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 11:24:56.027229 1355867 start.go:309] selected driver: kvm2
	I1217 11:24:56.027244 1355867 start.go:927] validating driver "kvm2" against &{Name:functional-843867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-843867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:24:56.027345 1355867 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:24:56.029052 1355867 out.go:203] 
	W1217 11:24:56.030161 1355867 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 11:24:56.031061 1355867 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
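
The failing invocation above is intentional: with --dry-run, minikube validates the request and rejects 250MB as below the usable minimum, exiting 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM. A sketch of the same check, with `minikube` standing in for out/minikube-linux-amd64:

minikube start -p functional-843867 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
echo "exit status: $?"   # 23 in the run above; a sufficient memory value lets the dry run pass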

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-843867 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-843867 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (115.691914ms)

                                                
                                                
-- stdout --
	* [functional-843867] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:24:56.213197 1355897 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:24:56.213330 1355897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:24:56.213341 1355897 out.go:374] Setting ErrFile to fd 2...
	I1217 11:24:56.213347 1355897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:24:56.213659 1355897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:24:56.214141 1355897 out.go:368] Setting JSON to false
	I1217 11:24:56.215090 1355897 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18435,"bootTime":1765952261,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:24:56.215145 1355897 start.go:143] virtualization: kvm guest
	I1217 11:24:56.216547 1355897 out.go:179] * [functional-843867] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 11:24:56.217763 1355897 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:24:56.217768 1355897 notify.go:221] Checking for updates...
	I1217 11:24:56.220278 1355897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:24:56.221280 1355897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:24:56.222301 1355897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:24:56.226411 1355897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:24:56.227440 1355897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:24:56.229012 1355897 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:24:56.229753 1355897 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:24:56.260717 1355897 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 11:24:56.261732 1355897 start.go:309] selected driver: kvm2
	I1217 11:24:56.261748 1355897 start.go:927] validating driver "kvm2" against &{Name:functional-843867 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-843867 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.235 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:24:56.261876 1355897 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:24:56.263726 1355897 out.go:203] 
	W1217 11:24:56.265302 1355897 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 11:24:56.266290 1355897 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
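
The French output above is the same insufficient-memory dry run, localized. A hedged sketch, assuming minikube picks its message language from the standard locale environment variables (LC_ALL is used here; the exact variable the test sets is not visible in this log):

LC_ALL=fr minikube start -p functional-843867 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
echo "exit status: $?"   # still 23; only the language of the RSRC_INSUFFICIENT_REQ_MEMORY message changes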

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)
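
The status command is checked in its three output forms above. A sketch with `minikube` standing in for out/minikube-linux-amd64 (the go-template below corrects the "kublet" label spelling used by the test, which is only a display label):

minikube -p functional-843867 status
minikube -p functional-843867 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
minikube -p functional-843867 status -o json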

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-843867 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-843867 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-dvhtw" [d5ce4316-7fb4-44e4-8d8b-3f886fef3032] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-dvhtw" [d5ce4316-7fb4-44e4-8d8b-3f886fef3032] Running
2025/12/17 11:25:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00352634s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.235:30950
functional_test.go:1680: http://192.168.39.235:30950: success! body:
Request served by hello-node-connect-7d85dfc575-dvhtw

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.235:30950
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.41s)
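
The flow above deploys kicbase/echo-server, exposes it on a NodePort, and resolves the node URL through minikube. A sketch of the same steps plus a manual check; the `kubectl wait` and `curl` lines are additions for by-hand use, not part of the test:

kubectl --context functional-843867 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-843867 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-843867 wait --for=condition=available deployment/hello-node-connect --timeout=120s
URL=$(minikube -p functional-843867 service hello-node-connect --url)
curl -s "$URL"   # the echo server replies with the request it served, as in the body above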

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (41.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [3d0ac308-3f98-40c9-a88a-4744b9f3a5b1] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004771441s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-843867 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-843867 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843867 get pvc myclaim -o=json
I1217 11:25:11.173468 1349907 retry.go:31] will retry after 1.876901605s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:c4105a15-c355-4ed0-802f-171859f892bd ResourceVersion:762 Generation:0 CreationTimestamp:2025-12-17 11:25:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0016c4e60 VolumeMode:0xc0016c4e70 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
E1217 11:25:11.247175 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-843867 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843867 apply -f testdata/storage-provisioner/pod.yaml
I1217 11:25:13.644670 1349907 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [29cd5a4c-fb23-44e9-8be4-d504355fbed3] Pending
helpers_test.go:353: "sp-pod" [29cd5a4c-fb23-44e9-8be4-d504355fbed3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [29cd5a4c-fb23-44e9-8be4-d504355fbed3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.005153008s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-843867 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-843867 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-843867 apply -f testdata/storage-provisioner/pod.yaml
I1217 11:25:40.772659 1349907 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0b6bf2ad-d61f-4d8e-80b5-48d2d8a7cb0a] Pending
helpers_test.go:353: "sp-pod" [0b6bf2ad-d61f-4d8e-80b5-48d2d8a7cb0a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004477907s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-843867 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.30s)
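
The claim the test creates can be read back from the last-applied-configuration annotation in the retry message above (500Mi, ReadWriteOnce, Filesystem). A sketch that applies an equivalent claim by hand; the pod from testdata/storage-provisioner/pod.yaml is not reproduced because its contents are not shown in this log:

kubectl --context functional-843867 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
kubectl --context functional-843867 get pvc myclaim   # phase should move from Pending to Bound once minikube-hostpath provisions the volume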

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh -n functional-843867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cp functional-843867:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3225407194/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh -n functional-843867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh -n functional-843867 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)
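
The cp test copies a file into the node, reads it back over ssh, and copies it out again. A sketch with `minikube` standing in for out/minikube-linux-amd64; /tmp/cp-test.txt is an illustrative local destination in place of the test's generated temp directory:

minikube -p functional-843867 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-843867 ssh -n functional-843867 "sudo cat /home/docker/cp-test.txt"
minikube -p functional-843867 cp functional-843867:/home/docker/cp-test.txt /tmp/cp-test.txt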

                                                
                                    
x
+
TestFunctional/parallel/MySQL (36.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-843867 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-pn22t" [223f7253-8344-452e-80bb-2e1e76bb1b88] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-pn22t" [223f7253-8344-452e-80bb-2e1e76bb1b88] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.003859961s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;": exit status 1 (160.50397ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:25:20.572606 1349907 retry.go:31] will retry after 1.067060822s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;": exit status 1 (214.456506ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:25:21.854664 1349907 retry.go:31] will retry after 1.277185152s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;": exit status 1 (195.548042ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:25:23.328608 1349907 retry.go:31] will retry after 3.323060952s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;": exit status 1 (161.534353ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:25:26.813526 1349907 retry.go:31] will retry after 3.767808513s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-843867 exec mysql-6bcdcbc558-pn22t -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.52s)
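
The retries above are expected: the pod reports Running before mysqld inside it finishes initializing, so the first few "show databases;" attempts are refused. A hedged polling sketch for the same check; `deploy/mysql` assumes the deployment created by testdata/mysql.yaml is named mysql (consistent with the mysql-6bcdcbc558-* pod name), and the password comes from that manifest:

# poll until mysqld accepts the root connection, mirroring the test's retry loop
until kubectl --context functional-843867 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
  sleep 5
done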

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1349907/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /etc/test/nested/copy/1349907/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1349907.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /etc/ssl/certs/1349907.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1349907.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /usr/share/ca-certificates/1349907.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/13499072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /etc/ssl/certs/13499072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/13499072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /usr/share/ca-certificates/13499072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.99s)
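
CertSync verifies that the test certificates are present inside the VM at each of the checked paths. A spot-check sketch, with `minikube` standing in for out/minikube-linux-amd64 (the 1349907 and 51391683 file names are the ones shown above and are specific to this run):

minikube -p functional-843867 ssh "sudo cat /etc/ssl/certs/1349907.pem"
minikube -p functional-843867 ssh "sudo cat /usr/share/ca-certificates/1349907.pem"
minikube -p functional-843867 ssh "sudo cat /etc/ssl/certs/51391683.0"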

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-843867 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "sudo systemctl is-active docker": exit status 1 (173.947277ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "sudo systemctl is-active containerd": exit status 1 (179.337517ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
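Note: these non-zero exits are the expected result. With cri-o as the configured runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3 inside the VM (the "Process exited with status 3" in stderr), which minikube ssh surfaces as exit status 1 on the host. A minimal Go sketch of the same check, assuming minikube on PATH; names and output handling are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		cmd := exec.Command("minikube", "-p", "functional-843867", "ssh",
			"sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		switch {
		case err == nil:
			// Exit 0 would mean the unit is active, i.e. a second runtime is running.
			fmt.Printf("%s is unexpectedly active\n", unit)
		case strings.Contains(string(out), "inactive"):
			// Inside the VM, "systemctl is-active" prints "inactive" and exits 3;
			// minikube ssh reports that here as a non-zero exit.
			fmt.Printf("%s is inactive, as expected with cri-o\n", unit)
		default:
			fmt.Printf("%s: unexpected failure: %v\n%s", unit, err, out)
		}
	}
}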

                                                
                                    
TestFunctional/parallel/License (0.4s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-843867 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-843867 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-pdsxh" [c00a30fa-5cb0-4edf-aee1-04914cb6fce9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-pdsxh" [c00a30fa-5cb0-4edf-aee1-04914cb6fce9] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004273172s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843867 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-843867
localhost/kicbase/echo-server:functional-843867
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843867 image ls --format short --alsologtostderr:
I1217 11:25:35.480754 1356782 out.go:360] Setting OutFile to fd 1 ...
I1217 11:25:35.481061 1356782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:35.481072 1356782 out.go:374] Setting ErrFile to fd 2...
I1217 11:25:35.481079 1356782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:35.481284 1356782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:25:35.481856 1356782 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:35.481999 1356782 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:35.484101 1356782 ssh_runner.go:195] Run: systemctl --version
I1217 11:25:35.486127 1356782 main.go:143] libmachine: domain functional-843867 has defined MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:35.486485 1356782 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:06", ip: ""} in network mk-functional-843867: {Iface:virbr1 ExpiryTime:2025-12-17 12:22:53 +0000 UTC Type:0 Mac:52:54:00:24:b7:06 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-843867 Clientid:01:52:54:00:24:b7:06}
I1217 11:25:35.486510 1356782 main.go:143] libmachine: domain functional-843867 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:35.486636 1356782 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-843867/id_rsa Username:docker}
I1217 11:25:35.566332 1356782 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843867 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.3            │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.3            │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.3            │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3            │ 5826b25d990d7 │ 76MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-843867  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-843867  │ 432fc0b81a421 │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843867 image ls --format table --alsologtostderr:
I1217 11:25:38.463323 1356838 out.go:360] Setting OutFile to fd 1 ...
I1217 11:25:38.463605 1356838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:38.463617 1356838 out.go:374] Setting ErrFile to fd 2...
I1217 11:25:38.463621 1356838 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:38.463810 1356838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:25:38.464442 1356838 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:38.464537 1356838 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:38.466523 1356838 ssh_runner.go:195] Run: systemctl --version
I1217 11:25:38.468680 1356838 main.go:143] libmachine: domain functional-843867 has defined MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:38.469079 1356838 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:06", ip: ""} in network mk-functional-843867: {Iface:virbr1 ExpiryTime:2025-12-17 12:22:53 +0000 UTC Type:0 Mac:52:54:00:24:b7:06 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-843867 Clientid:01:52:54:00:24:b7:06}
I1217 11:25:38.469104 1356838 main.go:143] libmachine: domain functional-843867 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:38.469229 1356838 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-843867/id_rsa Username:docker}
I1217 11:25:38.551928 1356838 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843867 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-843867"],"size":"4945146"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa3
8e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":[],"size":"1462480"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["do
cker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"20d0b
e4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":
["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"432fc0b81a42161b8f60c7a568429559186659dcd8c457f64ecfacf31fa5f73d","repoDigests":["localhost/minikube-local-cache-test@sha256:d3102da41a766cb7b686292643b622e8d05c78cd20246eca6d88b20da173d105"],"repoTags":["localhost/minikube-loca
l-cache-test:functional-843867"],"size":"3328"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"si
ze":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843867 image ls --format json --alsologtostderr:
I1217 11:25:38.248203 1356827 out.go:360] Setting OutFile to fd 1 ...
I1217 11:25:38.248495 1356827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:38.248506 1356827 out.go:374] Setting ErrFile to fd 2...
I1217 11:25:38.248512 1356827 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:38.248695 1356827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:25:38.249292 1356827 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:38.249413 1356827 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:38.251631 1356827 ssh_runner.go:195] Run: systemctl --version
I1217 11:25:38.254047 1356827 main.go:143] libmachine: domain functional-843867 has defined MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:38.254480 1356827 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:06", ip: ""} in network mk-functional-843867: {Iface:virbr1 ExpiryTime:2025-12-17 12:22:53 +0000 UTC Type:0 Mac:52:54:00:24:b7:06 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-843867 Clientid:01:52:54:00:24:b7:06}
I1217 11:25:38.254524 1356827 main.go:143] libmachine: domain functional-843867 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:38.254676 1356827 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-843867/id_rsa Username:docker}
I1217 11:25:38.337789 1356827 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
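Note: every record in the JSON listing above has the same shape: an image id, the repoDigests and repoTags it is known under, and its size in bytes encoded as a string. A minimal Go sketch that decodes that output, with the struct fields taken from the JSON shown above; invoking minikube from Go is only for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image matches one record of "image ls --format json" as printed above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-843867",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-16.16s tags=%d size=%s\n", img.ID, len(img.RepoTags), img.Size)
	}
}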

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843867 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-843867
size: "4945146"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 432fc0b81a42161b8f60c7a568429559186659dcd8c457f64ecfacf31fa5f73d
repoDigests:
- localhost/minikube-local-cache-test@sha256:d3102da41a766cb7b686292643b622e8d05c78cd20246eca6d88b20da173d105
repoTags:
- localhost/minikube-local-cache-test:functional-843867
size: "3328"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843867 image ls --format yaml --alsologtostderr:
I1217 11:25:35.687715 1356793 out.go:360] Setting OutFile to fd 1 ...
I1217 11:25:35.687973 1356793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:35.687991 1356793 out.go:374] Setting ErrFile to fd 2...
I1217 11:25:35.687996 1356793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:35.688201 1356793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:25:35.688692 1356793 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:35.688793 1356793 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:35.690883 1356793 ssh_runner.go:195] Run: systemctl --version
I1217 11:25:35.693287 1356793 main.go:143] libmachine: domain functional-843867 has defined MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:35.693683 1356793 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:06", ip: ""} in network mk-functional-843867: {Iface:virbr1 ExpiryTime:2025-12-17 12:22:53 +0000 UTC Type:0 Mac:52:54:00:24:b7:06 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-843867 Clientid:01:52:54:00:24:b7:06}
I1217 11:25:35.693721 1356793 main.go:143] libmachine: domain functional-843867 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:35.693863 1356793 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-843867/id_rsa Username:docker}
I1217 11:25:35.775832 1356793 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh pgrep buildkitd: exit status 1 (153.41447ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image build -t localhost/my-image:functional-843867 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 image build -t localhost/my-image:functional-843867 testdata/build --alsologtostderr: (3.411282981s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-843867 image build -t localhost/my-image:functional-843867 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 29e4ad9d352
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-843867
--> 53e88fb1cf8
Successfully tagged localhost/my-image:functional-843867
53e88fb1cf83d2f44129e69be68ebd6f3dd3b09dca71e27966f8c8428f3af095
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-843867 image build -t localhost/my-image:functional-843867 testdata/build --alsologtostderr:
I1217 11:25:36.033568 1356815 out.go:360] Setting OutFile to fd 1 ...
I1217 11:25:36.033697 1356815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:36.033710 1356815 out.go:374] Setting ErrFile to fd 2...
I1217 11:25:36.033716 1356815 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:25:36.034010 1356815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:25:36.034641 1356815 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:36.035841 1356815 config.go:182] Loaded profile config "functional-843867": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 11:25:36.038891 1356815 ssh_runner.go:195] Run: systemctl --version
I1217 11:25:36.041323 1356815 main.go:143] libmachine: domain functional-843867 has defined MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:36.041760 1356815 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:b7:06", ip: ""} in network mk-functional-843867: {Iface:virbr1 ExpiryTime:2025-12-17 12:22:53 +0000 UTC Type:0 Mac:52:54:00:24:b7:06 Iaid: IPaddr:192.168.39.235 Prefix:24 Hostname:functional-843867 Clientid:01:52:54:00:24:b7:06}
I1217 11:25:36.041789 1356815 main.go:143] libmachine: domain functional-843867 has defined IP address 192.168.39.235 and MAC address 52:54:00:24:b7:06 in network mk-functional-843867
I1217 11:25:36.041945 1356815 sshutil.go:53] new ssh client: &{IP:192.168.39.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-843867/id_rsa Username:docker}
I1217 11:25:36.120518 1356815 build_images.go:162] Building image from path: /tmp/build.1461548042.tar
I1217 11:25:36.120610 1356815 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 11:25:36.134048 1356815 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1461548042.tar
I1217 11:25:36.138878 1356815 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1461548042.tar: stat -c "%s %y" /var/lib/minikube/build/build.1461548042.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1461548042.tar': No such file or directory
I1217 11:25:36.138908 1356815 ssh_runner.go:362] scp /tmp/build.1461548042.tar --> /var/lib/minikube/build/build.1461548042.tar (3072 bytes)
I1217 11:25:36.167743 1356815 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1461548042
I1217 11:25:36.178733 1356815 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1461548042 -xf /var/lib/minikube/build/build.1461548042.tar
I1217 11:25:36.189514 1356815 crio.go:315] Building image: /var/lib/minikube/build/build.1461548042
I1217 11:25:36.189585 1356815 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-843867 /var/lib/minikube/build/build.1461548042 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 11:25:39.355714 1356815 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-843867 /var/lib/minikube/build/build.1461548042 --cgroup-manager=cgroupfs: (3.166092244s)
I1217 11:25:39.355845 1356815 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1461548042
I1217 11:25:39.370570 1356815 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1461548042.tar
I1217 11:25:39.381443 1356815 build_images.go:218] Built localhost/my-image:functional-843867 from /tmp/build.1461548042.tar
I1217 11:25:39.381488 1356815 build_images.go:134] succeeded building to: functional-843867
I1217 11:25:39.381496 1356815 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.75s)
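Note: the build log above shows the flow minikube uses with cri-o: the testdata/build context is packaged into a tar (/tmp/build.1461548042.tar), copied to /var/lib/minikube/build inside the VM, unpacked, and built with `sudo podman build --cgroup-manager=cgroupfs`. A minimal Go sketch of only the first step, packaging a context directory as a tar; the local ./build directory and output path are assumptions, and this is an illustration, not minikube's build_images.go:

package main

import (
	"archive/tar"
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// tarContext writes the regular files under dir into a tar archive at dst,
// roughly what minikube does before copying the build context into the VM.
func tarContext(dir, dst string) error {
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(dir, path) // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarContext("build", "/tmp/build-context.tar"); err != nil {
		panic(err)
	}
}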

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.958060901s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-843867
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "232.3244ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "73.729317ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "262.466785ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.39911ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image load --daemon kicbase/echo-server:functional-843867 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-843867 image load --daemon kicbase/echo-server:functional-843867 --alsologtostderr: (1.116437693s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image load --daemon kicbase/echo-server:functional-843867 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-843867
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image load --daemon kicbase/echo-server:functional-843867 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image save kicbase/echo-server:functional-843867 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image rm kicbase/echo-server:functional-843867 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-843867
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 image save --daemon kicbase/echo-server:functional-843867 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-843867
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.70s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (30.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdany-port1122397033/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765970701826634101" to /tmp/TestFunctionalparallelMountCmdany-port1122397033/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765970701826634101" to /tmp/TestFunctionalparallelMountCmdany-port1122397033/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765970701826634101" to /tmp/TestFunctionalparallelMountCmdany-port1122397033/001/test-1765970701826634101
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.612503ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1217 11:25:02.041598 1349907 retry.go:31] will retry after 420.218711ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 11:25 test-1765970701826634101
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh cat /mount-9p/test-1765970701826634101
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-843867 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [f9a5cc82-f34b-48fc-a86d-20f5e29d5982] Pending
helpers_test.go:353: "busybox-mount" [f9a5cc82-f34b-48fc-a86d-20f5e29d5982] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [f9a5cc82-f34b-48fc-a86d-20f5e29d5982] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [f9a5cc82-f34b-48fc-a86d-20f5e29d5982] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 28.003918226s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-843867 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdany-port1122397033/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (30.09s)
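Note: the initial findmnt failure followed by "will retry after 420.218711ms" is the normal pattern here: the 9p mount runs as a background daemon, so the test polls until the mount becomes visible. A minimal Go sketch of the same poll-and-retry, assuming the mount daemon from the log is already running; the helper name and backoff values are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls "findmnt -T <dir>" inside the VM until the 9p mount
// shows up or the deadline passes, mirroring the retry loop in the log above.
func waitForMount(profile, dir string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if cmd.Run() == nil {
			return nil // mount is visible
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff between retries
	}
	return fmt.Errorf("%s not mounted within %s", dir, timeout)
}

func main() {
	if err := waitForMount("functional-843867", "/mount-9p", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is ready")
}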

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.91s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service list -o json
functional_test.go:1504: Took "845.616508ms" to run "out/minikube-linux-amd64 -p functional-843867 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.235:32171
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.235:32171
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)
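Note: `service hello-node --url` only resolves and prints the NodePort endpoint (http://192.168.39.235:32171 above) without opening it. A minimal Go sketch that reuses that discovery step and then fetches the endpoint; running minikube from Go and the single-URL assumption are illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service URL, as the test does.
	out, err := exec.Command("minikube", "-p", "functional-843867",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the NodePort endpoint; kicbase/echo-server echoes the request back.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s\n", url, resp.Status, body)
}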

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdspecific-port264770720/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (185.340768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:25:32.106743 1349907 retry.go:31] will retry after 471.810229ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdspecific-port264770720/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "sudo umount -f /mount-9p": exit status 1 (172.221938ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-843867 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdspecific-port264770720/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.40s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T" /mount1: exit status 1 (219.576639ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:25:33.538031 1349907 retry.go:31] will retry after 686.405621ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-843867 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-843867 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-843867 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3828492060/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.43s)
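
VerifyCleanup relies on `minikube mount -p <profile> --kill=true` to terminate all outstanding mount processes for a profile. A small illustrative sketch of the same cleanup step, using only what the log shows (profile name and mount points from above):

    // mountcleanup.go: kill lingering `minikube mount` processes for a profile,
    // then confirm the guest no longer sees the 9p mounts.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        profile := "functional-843867"

        // `--kill=true` is the flag used by the VerifyCleanup test above.
        if out, err := exec.Command("minikube", "mount", "-p", profile,
            "--kill=true").CombinedOutput(); err != nil {
            panic(fmt.Errorf("mount --kill failed: %v\n%s", err, out))
        }

        // After cleanup, findmnt should fail for the mount points used earlier.
        for _, mp := range []string{"/mount1", "/mount2", "/mount3"} {
            err := exec.Command("minikube", "-p", profile, "ssh", "findmnt -T "+mp).Run()
            fmt.Printf("%s still mounted: %v\n", mp, err == nil)
        }
    }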

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-843867
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-843867
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-843867
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-1345916/.minikube/files/etc/test/nested/copy/1349907/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (72.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-604622 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m12.499476962s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (72.50s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (36.66s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 11:27:00.559878 1349907 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --alsologtostderr -v=8
E1217 11:27:27.381458 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-604622 --alsologtostderr -v=8: (36.662645028s)
functional_test.go:678: soft start took 36.663101136s for "functional-604622" cluster.
I1217 11:27:37.223010 1349907 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (36.66s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-604622 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:3.1: (1.332230499s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:3.3: (1.08904999s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 cache add registry.k8s.io/pause:latest: (1.11819463s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (3.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC2793639330/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache add minikube-local-cache-test:functional-604622
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 cache add minikube-local-cache-test:functional-604622: (1.940560575s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache delete minikube-local-cache-test:functional-604622
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (179.441092ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.51s)
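
The cache_reload subtest removes an image inside the node, confirms crictl no longer finds it, and restores it with `cache reload`. A sketch of the same cycle (commands and image name copied from the log above; not the test suite's own helper):

    // cachereload.go: remove a cached image inside the node and restore it with
    // `minikube cache reload`, mirroring the cache_reload subtest above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        fmt.Printf("$ minikube %v\n%s", args, out)
        return err
    }

    func main() {
        profile := "functional-604622"
        image := "registry.k8s.io/pause:latest"

        // Remove the image from the node's container runtime.
        _ = run("-p", profile, "ssh", "sudo crictl rmi "+image)

        // crictl inspecti is expected to fail now ("no such image").
        if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
            fmt.Println("image unexpectedly still present")
        }

        // Reload everything in minikube's local cache back into the node.
        if err := run("-p", profile, "cache", "reload"); err != nil {
            panic(err)
        }

        // The image should be back.
        if err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
            panic(err)
        }
    }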

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 kubectl -- --context functional-604622 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-604622 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (99.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 11:27:55.088682 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-604622 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m39.854375909s)
functional_test.go:776: restart took 1m39.854560006s for "functional-604622" cluster.
I1217 11:29:25.227850 1349907 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (99.85s)
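
The ExtraConfig restart above passes a component flag through to the apiserver via `--extra-config`. A minimal sketch of issuing the same restart programmatically (flag and value copied from the log; timing will vary):

    // extraconfig.go: restart a profile with an apiserver admission-plugin
    // override, as the ExtraConfig test does, and wait for all components.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("minikube", "start", "-p", "functional-604622",
            "--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
            "--wait=all")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Errorf("restart failed: %v\n%s", err, out))
        }
        fmt.Printf("restart took %s\n", time.Since(start))
    }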

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-604622 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
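
ComponentHealth reads the control-plane pods as JSON and reports phase and readiness per component. A sketch doing the same with kubectl and the standard Pod schema (context name and label selector from the log; the `component` label is the conventional one on control-plane pods):

    // componenthealth.go: list control-plane pods and report phase plus the
    // Ready condition, similar to the ComponentHealth check above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Labels map[string]string `json:"labels"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-604622",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }

        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }

        for _, p := range pods.Items {
            ready := "NotReady"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = "Ready"
                }
            }
            fmt.Printf("%s phase: %s, status: %s\n",
                p.Metadata.Labels["component"], p.Status.Phase, ready)
        }
    }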

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 logs: (1.295458426s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi567526323/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi567526323/001/logs.txt: (1.302514253s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.30s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-604622 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-604622
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-604622: exit status 115 (239.962746ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.74:30811 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-604622 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.09s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 config get cpus: exit status 14 (74.084651ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 config get cpus: exit status 14 (72.526717ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.46s)
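
The ConfigCmd run shows that `config get` exits with status 14 when the key is unset. A hedged sketch that runs the same set/get/unset cycle and inspects the exit code:

    // configcmd.go: exercise `minikube config set/get/unset` for one key and
    // surface the exit code when the key is absent (14 in the log above).
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func configGet(profile, key string) (string, int, error) {
        out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return "", exitErr.ExitCode(), nil // non-zero exit, e.g. 14 when unset
        }
        return string(out), 0, err
    }

    func main() {
        profile := "functional-604622"

        exec.Command("minikube", "-p", profile, "config", "set", "cpus", "2").Run()
        if val, code, _ := configGet(profile, "cpus"); code == 0 {
            fmt.Printf("cpus is set to %s", val)
        }

        exec.Command("minikube", "-p", profile, "config", "unset", "cpus").Run()
        if _, code, _ := configGet(profile, "cpus"); code != 0 {
            fmt.Printf("cpus is unset (exit code %d)\n", code)
        }
    }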

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (15.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604622 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-604622 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 1359018: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (15.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604622 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (119.007191ms)

                                                
                                                
-- stdout --
	* [functional-604622] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:29:34.292562 1358795 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:29:34.292825 1358795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:34.292835 1358795 out.go:374] Setting ErrFile to fd 2...
	I1217 11:29:34.292839 1358795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:34.293032 1358795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:29:34.293464 1358795 out.go:368] Setting JSON to false
	I1217 11:29:34.294428 1358795 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18713,"bootTime":1765952261,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:29:34.294487 1358795 start.go:143] virtualization: kvm guest
	I1217 11:29:34.296517 1358795 out.go:179] * [functional-604622] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 11:29:34.297875 1358795 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:29:34.297884 1358795 notify.go:221] Checking for updates...
	I1217 11:29:34.299267 1358795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:29:34.300693 1358795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:29:34.302080 1358795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:29:34.303490 1358795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:29:34.304689 1358795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:29:34.306608 1358795 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:29:34.307485 1358795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:29:34.340729 1358795 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 11:29:34.341910 1358795 start.go:309] selected driver: kvm2
	I1217 11:29:34.341929 1358795 start.go:927] validating driver "kvm2" against &{Name:functional-604622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-604622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:29:34.342095 1358795 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:29:34.344646 1358795 out.go:203] 
	W1217 11:29:34.345854 1358795 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 11:29:34.347229 1358795 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.25s)
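
In the DryRun output above, requesting 250MB is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23, while the second dry run with the profile's existing settings succeeds. A sketch that reproduces the failing case and reads the exit code (flags copied from the log):

    // dryrun.go: ask minikube to validate an undersized memory request with
    // --dry-run; per the log above this exits with status 23.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-604622",
            "--dry-run", "--memory", "250MB",
            "--driver=kvm2", "--container-runtime=crio",
            "--kubernetes-version=v1.35.0-rc.1")
        out, err := cmd.CombinedOutput()

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("dry run rejected (exit %d):\n%s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("dry run accepted:\n%s", out)
    }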

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-604622 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-604622 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (131.164659ms)

                                                
                                                
-- stdout --
	* [functional-604622] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:29:34.556262 1358828 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:29:34.556366 1358828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:34.556373 1358828 out.go:374] Setting ErrFile to fd 2...
	I1217 11:29:34.556379 1358828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:29:34.556688 1358828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:29:34.557174 1358828 out.go:368] Setting JSON to false
	I1217 11:29:34.558085 1358828 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18714,"bootTime":1765952261,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 11:29:34.558159 1358828 start.go:143] virtualization: kvm guest
	I1217 11:29:34.559728 1358828 out.go:179] * [functional-604622] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 11:29:34.561557 1358828 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 11:29:34.561610 1358828 notify.go:221] Checking for updates...
	I1217 11:29:34.564334 1358828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 11:29:34.565525 1358828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 11:29:34.566624 1358828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 11:29:34.567743 1358828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 11:29:34.569050 1358828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 11:29:34.570733 1358828 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 11:29:34.571453 1358828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 11:29:34.605964 1358828 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 11:29:34.607227 1358828 start.go:309] selected driver: kvm2
	I1217 11:29:34.607248 1358828 start.go:927] validating driver "kvm2" against &{Name:functional-604622 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-604622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.74 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 11:29:34.607349 1358828 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 11:29:34.609479 1358828 out.go:203] 
	W1217 11:29:34.610774 1358828 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 11:29:34.612159 1358828 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.71s)
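
StatusCmd checks three output modes of `minikube status`. A short sketch of the Go-template form used above, with the field names (Host, Kubelet, APIServer, Kubeconfig) taken from the log's template string:

    // statuscmd.go: query minikube status with a Go template, using the status
    // fields shown in the log above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-604622", "status",
            "-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}",
        ).CombinedOutput()
        // status can exit non-zero when a component is not running, so the
        // output is printed regardless of the error value.
        fmt.Printf("%s (err: %v)\n", out, err)
    }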

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (26.82s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-604622 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-604622 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-rfqvz" [e30d1d6a-77d9-4c54-8c4d-78b3eee07ea8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-rfqvz" [e30d1d6a-77d9-4c54-8c4d-78b3eee07ea8] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.01288924s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.74:30943
functional_test.go:1680: http://192.168.39.74:30943: success! body:
Request served by hello-node-connect-9f67c86d4-rfqvz

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.74:30943
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
E1217 11:30:13.476437 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (26.82s)
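
ServiceCmdConnect creates a deployment, exposes it as a NodePort service, resolves the URL with `minikube service --url`, and fetches it. A condensed sketch of that flow (image, port, and names from the log; readiness waiting is reduced to a simple retry loop rather than the test's pod watcher):

    // servicecmdconnect.go: deploy an echo server, expose it as a NodePort,
    // resolve its URL through minikube, and fetch it, as the test above does.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "functional-604622" // context and profile share this name

        exec.Command("kubectl", "--context", ctx, "create", "deployment",
            "hello-node-connect", "--image", "kicbase/echo-server").Run()
        exec.Command("kubectl", "--context", ctx, "expose", "deployment",
            "hello-node-connect", "--type=NodePort", "--port=8080").Run()

        urlBytes, err := exec.Command("minikube", "-p", ctx, "service",
            "hello-node-connect", "--url").Output()
        if err != nil {
            panic(err)
        }
        url := strings.TrimSpace(string(urlBytes))

        // Retry until the pod behind the service is actually serving.
        for i := 0; i < 30; i++ {
            resp, err := http.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%s: success! body:\n%s\n", url, body)
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("service never became reachable")
    }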

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (48.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [c5c0eb76-9830-4888-8c6a-812cad323a65] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005862278s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-604622 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-604622 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-604622 get pvc myclaim -o=json
I1217 11:29:45.804954 1349907 retry.go:31] will retry after 2.646972458s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d567912a-f213-4bbd-8955-0bfdbe7f19fd ResourceVersion:808 Generation:0 CreationTimestamp:2025-12-17 11:29:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00080b300 VolumeMode:0xc00080b310 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-604622 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-604622 apply -f testdata/storage-provisioner/pod.yaml
I1217 11:29:49.079401 1349907 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [c85399cb-fdbe-46b7-a1d5-0c8548095962] Pending
helpers_test.go:353: "sp-pod" [c85399cb-fdbe-46b7-a1d5-0c8548095962] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/12/17 11:29:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "sp-pod" [c85399cb-fdbe-46b7-a1d5-0c8548095962] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.00454104s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-604622 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-604622 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-604622 apply -f testdata/storage-provisioner/pod.yaml
I1217 11:30:21.027850 1349907 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [891eacdb-fda4-4abd-84e1-c1228dde56d6] Pending
helpers_test.go:353: "sp-pod" [891eacdb-fda4-4abd-84e1-c1228dde56d6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [891eacdb-fda4-4abd-84e1-c1228dde56d6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004518972s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-604622 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (48.64s)
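
The PersistentVolumeClaim test polls the claim until its phase moves from Pending to Bound (the retry is visible in the log above). A sketch of that polling loop using the standard PVC status schema, with the claim name and context from the log:

    // pvcwait.go: poll a PersistentVolumeClaim until status.phase is "Bound",
    // mirroring the retry in the PersistentVolumeClaim test above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "functional-604622",
                "get", "pvc", "myclaim", "-o", "json").Output()
            if err != nil {
                panic(err)
            }

            var pvc struct {
                Status struct {
                    Phase string `json:"phase"`
                } `json:"status"`
            }
            if err := json.Unmarshal(out, &pvc); err != nil {
                panic(err)
            }

            if pvc.Status.Phase == "Bound" {
                fmt.Println("testpvc phase = Bound")
                return
            }
            fmt.Printf("testpvc phase = %q, want \"Bound\"; retrying\n", pvc.Status.Phase)
            time.Sleep(3 * time.Second)
        }
        fmt.Println("claim never bound")
    }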

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.41s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh -n functional-604622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cp functional-604622:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm320664909/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh -n functional-604622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh -n functional-604622 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.22s)
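
CpCmd copies a file into the node, back out again, and into a path that does not yet exist. A minimal sketch of the same round trip with `minikube cp` plus an ssh-side cat to verify the contents (file contents here are hypothetical, paths and flags from the log):

    // cpcmd.go: round-trip a file through `minikube cp` and verify it inside
    // the node over ssh, as the CpCmd helpers above do.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        profile := "functional-604622"

        // Create a small local file to copy (hypothetical content).
        src := filepath.Join(os.TempDir(), "cp-test.txt")
        if err := os.WriteFile(src, []byte("hello from the host\n"), 0o644); err != nil {
            panic(err)
        }

        // Host -> node.
        if out, err := exec.Command("minikube", "-p", profile, "cp",
            src, "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
            panic(fmt.Errorf("cp into node failed: %v\n%s", err, out))
        }

        // Read it back over ssh to confirm the copy.
        out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
            "sudo cat /home/docker/cp-test.txt").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Printf("node copy contents: %s", out)

        // Node -> host, into a fresh temporary path.
        dst := filepath.Join(os.TempDir(), "cp-test-back.txt")
        if out, err := exec.Command("minikube", "-p", profile, "cp",
            profile+":/home/docker/cp-test.txt", dst).CombinedOutput(); err != nil {
            panic(fmt.Errorf("cp out of node failed: %v\n%s", err, out))
        }
        fmt.Println("copied back to", dst)
    }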

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (37.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-604622 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-chpqg" [8bfef443-af05-409e-9978-fc8e82bdc7a8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-chpqg" [8bfef443-af05-409e-9978-fc8e82bdc7a8] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 27.096062538s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;": exit status 1 (191.619191ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:30:11.372228 1349907 retry.go:31] will retry after 600.874196ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;": exit status 1 (303.146841ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:30:12.277515 1349907 retry.go:31] will retry after 1.469693193s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;": exit status 1 (195.029291ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:30:13.943337 1349907 retry.go:31] will retry after 3.158133211s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;": exit status 1 (158.41515ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 11:30:17.261107 1349907 retry.go:31] will retry after 4.097155455s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-604622 exec mysql-7d7b65bc95-chpqg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (37.62s)
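The repeated ERROR 1045 / ERROR 2002 lines above are expected while mysqld is still initializing; the test simply retries the exec with growing delays until "show databases;" succeeds. A minimal Go sketch of that retry pattern follows; the context, pod name, attempt count, and backoff values are illustrative, not the project's retry.go implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, pod := "functional-604622", "mysql-7d7b65bc95-chpqg" // values from this run
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		cmd := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
			return
		}
		// Access-denied and socket errors just mean the server is still starting up.
		fmt.Printf("attempt %d failed (%v), retrying after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became ready")
}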

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1349907/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /etc/test/nested/copy/1349907/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.17s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1349907.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /etc/ssl/certs/1349907.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1349907.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /usr/share/ca-certificates/1349907.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/13499072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /etc/ssl/certs/13499072.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/13499072.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /usr/share/ca-certificates/13499072.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-604622 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "sudo systemctl is-active docker": exit status 1 (208.889024ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "sudo systemctl is-active containerd": exit status 1 (185.181295ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.39s)
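Here the non-zero exits are the pass condition: with crio as the active runtime, "systemctl is-active docker" and "... containerd" print "inactive" and exit 3, which ssh surfaces as "Process exited with status 3". A minimal Go sketch of that check, assuming the same profile and binary path as above:
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-604622",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		// A non-zero exit together with "inactive" on stdout is the expected outcome.
		if err != nil && state == "inactive" {
			fmt.Printf("%s is disabled, as expected\n", unit)
			continue
		}
		fmt.Printf("unexpected result for %s: %q (err=%v)\n", unit, state, err)
	}
}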

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-604622 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-604622 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-mccjw" [7b03295a-3272-49e0-8f8a-98002d59947f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-mccjw" [7b03295a-3272-49e0-8f8a-98002d59947f] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004241452s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.40s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604622 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-604622
localhost/kicbase/echo-server:functional-604622
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604622 image ls --format short --alsologtostderr:
I1217 11:29:51.934626 1359624 out.go:360] Setting OutFile to fd 1 ...
I1217 11:29:51.934959 1359624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:51.934976 1359624 out.go:374] Setting ErrFile to fd 2...
I1217 11:29:51.934995 1359624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:51.935306 1359624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:29:51.936208 1359624 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:51.936352 1359624 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:51.938918 1359624 ssh_runner.go:195] Run: systemctl --version
I1217 11:29:51.941463 1359624 main.go:143] libmachine: domain functional-604622 has defined MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:51.941965 1359624 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:39:a0", ip: ""} in network mk-functional-604622: {Iface:virbr1 ExpiryTime:2025-12-17 12:26:02 +0000 UTC Type:0 Mac:52:54:00:7b:39:a0 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:functional-604622 Clientid:01:52:54:00:7b:39:a0}
I1217 11:29:51.942023 1359624 main.go:143] libmachine: domain functional-604622 has defined IP address 192.168.39.74 and MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:51.942183 1359624 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-604622/id_rsa Username:docker}
I1217 11:29:52.029970 1359624 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604622 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1       │ 5032a56602e1b │ 76.9MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-604622  │ 432fc0b81a421 │ 3.33kB │
│ localhost/my-image                      │ functional-604622  │ a64632a1c8a1e │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.6-0            │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1       │ 73f80cdc073da │ 52.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-604622  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1       │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1       │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604622 image ls --format table --alsologtostderr:
I1217 11:30:06.497489 1359757 out.go:360] Setting OutFile to fd 1 ...
I1217 11:30:06.497750 1359757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:30:06.497760 1359757 out.go:374] Setting ErrFile to fd 2...
I1217 11:30:06.497764 1359757 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:30:06.498004 1359757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:30:06.498767 1359757 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:30:06.498889 1359757 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:30:06.501578 1359757 ssh_runner.go:195] Run: systemctl --version
I1217 11:30:06.504705 1359757 main.go:143] libmachine: domain functional-604622 has defined MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:30:06.505253 1359757 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:39:a0", ip: ""} in network mk-functional-604622: {Iface:virbr1 ExpiryTime:2025-12-17 12:26:02 +0000 UTC Type:0 Mac:52:54:00:7b:39:a0 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:functional-604622 Clientid:01:52:54:00:7b:39:a0}
I1217 11:30:06.505293 1359757 main.go:143] libmachine: domain functional-604622 has defined IP address 192.168.39.74 and MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:30:06.505490 1359757 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-604622/id_rsa Username:docker}
I1217 11:30:06.601838 1359757 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604622 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"f2c4af65f9b63ebd65c058ec24bddc9834161499e72fdea050d503a29e95351a","repoDigests":["docker.io/library/ec0372c84f1844fef6587fec5457a922ac18a7d16bf68853794b144694828986-tmp@sha256:d94c50045151dc0baf38025713a63d4d0a39dfb284b7dc16e666703ca8269f40"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256
:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size
":"803724943"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d2
9f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"432fc0b81a42161b8f60c7a568429559186659dcd8c457f64ecfacf31fa5f73d","repoDigests":["localhost/minikube-local-cache-test@sha256:d3102da41a766cb7b686292643b622e8d05c78cd20246eca6d88b20da173d105"],"repoTags":["localhost/minikube-local-cache-test:functional-604622"],"size":"3328"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],
"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size
":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-604622"],"size":"4945246"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899
69449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a64632a1c8a1e299d59f9b29bd4b420888041a9d3cf0bb9443d0296da4a65636","repoDigests":["localhost/my-image@sha256:6d9688c4d17f
b8f2fbc92fe430dc077fa3569626078ebcbaa495688152dee457"],"repoTags":["localhost/my-image:functional-604622"],"size":"1468600"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604622 image ls --format json --alsologtostderr:
I1217 11:30:06.275101 1359747 out.go:360] Setting OutFile to fd 1 ...
I1217 11:30:06.275275 1359747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:30:06.275287 1359747 out.go:374] Setting ErrFile to fd 2...
I1217 11:30:06.275294 1359747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:30:06.275554 1359747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:30:06.276420 1359747 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:30:06.276576 1359747 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:30:06.279555 1359747 ssh_runner.go:195] Run: systemctl --version
I1217 11:30:06.282565 1359747 main.go:143] libmachine: domain functional-604622 has defined MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:30:06.283345 1359747 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:39:a0", ip: ""} in network mk-functional-604622: {Iface:virbr1 ExpiryTime:2025-12-17 12:26:02 +0000 UTC Type:0 Mac:52:54:00:7b:39:a0 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:functional-604622 Clientid:01:52:54:00:7b:39:a0}
I1217 11:30:06.283390 1359747 main.go:143] libmachine: domain functional-604622 has defined IP address 192.168.39.74 and MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:30:06.283568 1359747 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-604622/id_rsa Username:docker}
I1217 11:30:06.372233 1359747 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.21s)
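The JSON printed above is an array of objects with "id", "repoDigests", "repoTags", and a string "size". A minimal Go sketch that decodes that output into the same tag/ID/size view the table format shows; only those three fields are modeled, and the binary path and profile name are assumptions carried over from this run.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // size is reported as a string of bytes
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-604622",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %.13s %s bytes\n", tag, img.ID, img.Size)
		}
	}
}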

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604622 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 432fc0b81a42161b8f60c7a568429559186659dcd8c457f64ecfacf31fa5f73d
repoDigests:
- localhost/minikube-local-cache-test@sha256:d3102da41a766cb7b686292643b622e8d05c78cd20246eca6d88b20da173d105
repoTags:
- localhost/minikube-local-cache-test:functional-604622
size: "3328"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-604622
size: "4945246"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604622 image ls --format yaml --alsologtostderr:
I1217 11:29:52.160380 1359635 out.go:360] Setting OutFile to fd 1 ...
I1217 11:29:52.160703 1359635 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:52.160714 1359635 out.go:374] Setting ErrFile to fd 2...
I1217 11:29:52.160721 1359635 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:52.161076 1359635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:29:52.161927 1359635 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:52.162105 1359635 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:52.164947 1359635 ssh_runner.go:195] Run: systemctl --version
I1217 11:29:52.167685 1359635 main.go:143] libmachine: domain functional-604622 has defined MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:52.168185 1359635 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:39:a0", ip: ""} in network mk-functional-604622: {Iface:virbr1 ExpiryTime:2025-12-17 12:26:02 +0000 UTC Type:0 Mac:52:54:00:7b:39:a0 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:functional-604622 Clientid:01:52:54:00:7b:39:a0}
I1217 11:29:52.168233 1359635 main.go:143] libmachine: domain functional-604622 has defined IP address 192.168.39.74 and MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:52.168418 1359635 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-604622/id_rsa Username:docker}
I1217 11:29:52.276389 1359635 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (13.77s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh pgrep buildkitd: exit status 1 (197.144166ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image build -t localhost/my-image:functional-604622 testdata/build --alsologtostderr
E1217 11:29:52.979911 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:52.986378 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:52.997929 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:53.019569 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:53.061070 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:53.142778 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:53.305045 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:53.626835 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:54.268366 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:55.550118 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:29:58.112118 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:30:03.234089 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 image build -t localhost/my-image:functional-604622 testdata/build --alsologtostderr: (13.36099061s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-604622 image build -t localhost/my-image:functional-604622 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f2c4af65f9b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-604622
--> a64632a1c8a
Successfully tagged localhost/my-image:functional-604622
a64632a1c8a1e299d59f9b29bd4b420888041a9d3cf0bb9443d0296da4a65636
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-604622 image build -t localhost/my-image:functional-604622 testdata/build --alsologtostderr:
I1217 11:29:52.709848 1359656 out.go:360] Setting OutFile to fd 1 ...
I1217 11:29:52.710188 1359656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:52.710199 1359656 out.go:374] Setting ErrFile to fd 2...
I1217 11:29:52.710204 1359656 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 11:29:52.710447 1359656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
I1217 11:29:52.711079 1359656 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:52.711809 1359656 config.go:182] Loaded profile config "functional-604622": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 11:29:52.714810 1359656 ssh_runner.go:195] Run: systemctl --version
I1217 11:29:52.717941 1359656 main.go:143] libmachine: domain functional-604622 has defined MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:52.718477 1359656 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:39:a0", ip: ""} in network mk-functional-604622: {Iface:virbr1 ExpiryTime:2025-12-17 12:26:02 +0000 UTC Type:0 Mac:52:54:00:7b:39:a0 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:functional-604622 Clientid:01:52:54:00:7b:39:a0}
I1217 11:29:52.718520 1359656 main.go:143] libmachine: domain functional-604622 has defined IP address 192.168.39.74 and MAC address 52:54:00:7b:39:a0 in network mk-functional-604622
I1217 11:29:52.718713 1359656 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/functional-604622/id_rsa Username:docker}
I1217 11:29:52.830874 1359656 build_images.go:162] Building image from path: /tmp/build.1116597702.tar
I1217 11:29:52.831008 1359656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 11:29:52.861750 1359656 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1116597702.tar
I1217 11:29:52.875823 1359656 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1116597702.tar: stat -c "%s %y" /var/lib/minikube/build/build.1116597702.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1116597702.tar': No such file or directory
I1217 11:29:52.875902 1359656 ssh_runner.go:362] scp /tmp/build.1116597702.tar --> /var/lib/minikube/build/build.1116597702.tar (3072 bytes)
I1217 11:29:52.956661 1359656 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1116597702
I1217 11:29:52.978635 1359656 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1116597702 -xf /var/lib/minikube/build/build.1116597702.tar
I1217 11:29:53.003145 1359656 crio.go:315] Building image: /var/lib/minikube/build/build.1116597702
I1217 11:29:53.003241 1359656 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-604622 /var/lib/minikube/build/build.1116597702 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 11:30:05.963051 1359656 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-604622 /var/lib/minikube/build/build.1116597702 --cgroup-manager=cgroupfs: (12.959778965s)
I1217 11:30:05.963131 1359656 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1116597702
I1217 11:30:05.978435 1359656 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1116597702.tar
I1217 11:30:05.990339 1359656 build_images.go:218] Built localhost/my-image:functional-604622 from /tmp/build.1116597702.tar
I1217 11:30:05.990381 1359656 build_images.go:134] succeeded building to: functional-604622
I1217 11:30:05.990387 1359656 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (13.77s)
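The stderr trace above walks through how image build works on the crio runtime: the local build context is tarred, copied into the VM, unpacked, built with "sudo podman build ... --cgroup-manager=cgroupfs", and the staging files are removed. The Go sketch below mirrors that sequence with hypothetical file names and in-guest paths (the real test generates random /tmp/build.NNN.tar paths), so treat it as an outline rather than the test's implementation.
package main

import (
	"fmt"
	"os/exec"
)

// run executes one command, echoes its combined output, and stops on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-604622"
	// 1. Tar the build context on the host.
	run("tar", "-cf", "build.ctx.tar", "-C", "testdata/build", ".")
	// 2. Copy the tarball into the guest.
	run("out/minikube-linux-amd64", "-p", profile, "cp", "build.ctx.tar", "/home/docker/build.ctx.tar")
	// 3. Unpack it inside the guest.
	run("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo mkdir -p /tmp/build.ctx && sudo tar -C /tmp/build.ctx -xf /home/docker/build.ctx.tar")
	// 4. Build with podman under cgroupfs, as the trace shows.
	run("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo podman build -t localhost/my-image:"+profile+" /tmp/build.ctx --cgroup-manager=cgroupfs")
	// 5. Clean up the staging files.
	run("out/minikube-linux-amd64", "-p", profile, "ssh",
		"sudo rm -rf /tmp/build.ctx /home/docker/build.ctx.tar")
}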

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "277.24203ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "80.576043ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (9.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun234889999/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765970973630434691" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun234889999/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765970973630434691" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun234889999/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765970973630434691" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun234889999/001/test-1765970973630434691
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (179.353537ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:29:33.810195 1349907 retry.go:31] will retry after 673.052757ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 11:29 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 11:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 11:29 test-1765970973630434691
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh cat /mount-9p/test-1765970973630434691
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-604622 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [fa9d515e-a5ae-472f-a104-2f6340654948] Pending
helpers_test.go:353: "busybox-mount" [fa9d515e-a5ae-472f-a104-2f6340654948] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [fa9d515e-a5ae-472f-a104-2f6340654948] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [fa9d515e-a5ae-472f-a104-2f6340654948] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004717574s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-604622 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun234889999/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (9.39s)
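
For manual reproduction, the 9p mount flow exercised above amounts to roughly the following shell session (profile name taken from this run; the host directory is a placeholder, and a plain minikube binary is assumed in place of the out/ build under test):

    minikube mount -p functional-604622 /tmp/mount-src:/mount-9p &         # start the 9p mount server in the background
    minikube -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p"     # confirm the guest sees a 9p mount (may need a retry right after mounting, as above)
    minikube -p functional-604622 ssh -- ls -la /mount-9p                  # host files should be visible from the guest
    minikube -p functional-604622 ssh "sudo umount -f /mount-9p"           # tear the mount down when finished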

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image load --daemon kicbase/echo-server:functional-604622 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-604622 image load --daemon kicbase/echo-server:functional-604622 --alsologtostderr: (1.307521421s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.62s)
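
The image-load path checked here can be repeated by hand along these lines (image and profile names from this run; a plain minikube binary is assumed):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-604622                      # tag the image for this profile, as the Setup step did
    minikube -p functional-604622 image load --daemon kicbase/echo-server:functional-604622       # push it from the local Docker daemon into the cluster runtime
    minikube -p functional-604622 image ls                                                         # the tag should now appear in the runtime's image list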

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "254.646403ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "72.36438ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.33s)
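
The profile listing variants timed by these ProfileCmd subtests are, in plain form (sketch only; the tests above assert little beyond each call completing):

    minikube profile list                     # table output
    minikube profile list -l                  # lighter listing, noticeably faster in the timings above
    minikube profile list -o json             # machine-readable output
    minikube profile list -o json --light     # lighter JSON listing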

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image load --daemon kicbase/echo-server:functional-604622 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-604622
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image load --daemon kicbase/echo-server:functional-604622 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image save kicbase/echo-server:functional-604622 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image rm kicbase/echo-server:functional-604622 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.76s)
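
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile amount to this save/restore round trip (the tarball path here is a placeholder; profile and image names are from this run):

    minikube -p functional-604622 image save kicbase/echo-server:functional-604622 /tmp/echo-server-save.tar
    minikube -p functional-604622 image rm kicbase/echo-server:functional-604622
    minikube -p functional-604622 image load /tmp/echo-server-save.tar
    minikube -p functional-604622 image ls    # the image should be back after loading the tarball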

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-604622
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 image save --daemon kicbase/echo-server:functional-604622 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service list -o json
functional_test.go:1504: Took "448.515464ms" to run "out/minikube-linux-amd64 -p functional-604622 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770455138/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.972398ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:29:43.202112 1349907 retry.go:31] will retry after 664.110244ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770455138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "sudo umount -f /mount-9p": exit status 1 (193.442607ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-604622 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun770455138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.74:30299
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.24s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.74:30299
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.24s)
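
The ServiceCmd subtests above boil down to querying the hello-node service in a few output formats, roughly (service and profile names from this run; a plain minikube binary is assumed):

    minikube -p functional-604622 service list                                            # table of exposed services
    minikube -p functional-604622 service list -o json                                    # same data as JSON
    minikube -p functional-604622 service --namespace=default --https --url hello-node    # https:// NodePort endpoint
    minikube -p functional-604622 service hello-node --url                                # http:// NodePort endpoint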

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T" /mount1: exit status 1 (172.666271ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 11:29:44.804480 1349907 retry.go:31] will retry after 409.443325ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-604622 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-604622 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-604622 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4015079732/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.25s)
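
VerifyCleanup corresponds to starting several mounts and then killing them in one go, roughly (host path is a placeholder):

    minikube mount -p functional-604622 /tmp/mount-src:/mount1 &
    minikube mount -p functional-604622 /tmp/mount-src:/mount2 &
    minikube mount -p functional-604622 /tmp/mount-src:/mount3 &
    minikube -p functional-604622 ssh "findmnt -T" /mount1      # repeat for /mount2 and /mount3
    minikube mount -p functional-604622 --kill=true             # the test relies on this stopping the background mount processes for the profile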

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-604622
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (230.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 11:30:33.958024 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:31:14.920154 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:32:27.381679 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:32:36.841873 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m49.897396022s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (230.43s)
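
The cluster brought up here is an HA profile (three control-plane nodes in this run, per the status output further down); stripped of test-only logging flags, the invocation is essentially:

    minikube -p ha-263112 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    minikube -p ha-263112 status    # all control planes should report Running/Configured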

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 kubectl -- rollout status deployment/busybox: (4.472853575s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-dc7vp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-fhmbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-tvz8r -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-dc7vp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-fhmbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-tvz8r -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-dc7vp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-fhmbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-tvz8r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.78s)
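
DeployApp's DNS checks can be repeated against any of the busybox pods using the same minikube kubectl pass-through seen above (POD_NAME is a placeholder for one of the names returned by get pods):

    minikube -p ha-263112 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube -p ha-263112 kubectl -- rollout status deployment/busybox
    minikube -p ha-263112 kubectl -- exec POD_NAME -- nslookup kubernetes.default.svc.cluster.local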

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-dc7vp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-dc7vp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-fhmbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-fhmbb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-tvz8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 kubectl -- exec busybox-7b57f96db7-tvz8r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.33s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node add --alsologtostderr -v 5
E1217 11:34:32.191024 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.197496 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.208999 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.230536 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.272102 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.353678 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.515361 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:32.836904 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:33.479176 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:34.760975 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:37.322756 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:42.445129 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:52.686978 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:34:52.979784 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:35:13.168758 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 node add --alsologtostderr -v 5: (48.150441529s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.80s)
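
Adding the worker and re-checking cluster health, without the test's verbose logging, is simply:

    minikube -p ha-263112 node add    # adds a worker node (ha-263112-m04 in this run)
    minikube -p ha-263112 status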

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-263112 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp testdata/cp-test.txt ha-263112:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2513097718/001/cp-test_ha-263112.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112:/home/docker/cp-test.txt ha-263112-m02:/home/docker/cp-test_ha-263112_ha-263112-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test_ha-263112_ha-263112-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112:/home/docker/cp-test.txt ha-263112-m03:/home/docker/cp-test_ha-263112_ha-263112-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test.txt"
E1217 11:35:20.684135 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test_ha-263112_ha-263112-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112:/home/docker/cp-test.txt ha-263112-m04:/home/docker/cp-test_ha-263112_ha-263112-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test_ha-263112_ha-263112-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp testdata/cp-test.txt ha-263112-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2513097718/001/cp-test_ha-263112-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m02:/home/docker/cp-test.txt ha-263112:/home/docker/cp-test_ha-263112-m02_ha-263112.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test_ha-263112-m02_ha-263112.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m02:/home/docker/cp-test.txt ha-263112-m03:/home/docker/cp-test_ha-263112-m02_ha-263112-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test_ha-263112-m02_ha-263112-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m02:/home/docker/cp-test.txt ha-263112-m04:/home/docker/cp-test_ha-263112-m02_ha-263112-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test_ha-263112-m02_ha-263112-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp testdata/cp-test.txt ha-263112-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2513097718/001/cp-test_ha-263112-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m03:/home/docker/cp-test.txt ha-263112:/home/docker/cp-test_ha-263112-m03_ha-263112.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test_ha-263112-m03_ha-263112.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m03:/home/docker/cp-test.txt ha-263112-m02:/home/docker/cp-test_ha-263112-m03_ha-263112-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test_ha-263112-m03_ha-263112-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m03:/home/docker/cp-test.txt ha-263112-m04:/home/docker/cp-test_ha-263112-m03_ha-263112-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test_ha-263112-m03_ha-263112-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp testdata/cp-test.txt ha-263112-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2513097718/001/cp-test_ha-263112-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m04:/home/docker/cp-test.txt ha-263112:/home/docker/cp-test_ha-263112-m04_ha-263112.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112 "sudo cat /home/docker/cp-test_ha-263112-m04_ha-263112.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m04:/home/docker/cp-test.txt ha-263112-m02:/home/docker/cp-test_ha-263112-m04_ha-263112-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test_ha-263112-m04_ha-263112-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 cp ha-263112-m04:/home/docker/cp-test.txt ha-263112-m03:/home/docker/cp-test_ha-263112-m04_ha-263112-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 ssh -n ha-263112-m03 "sudo cat /home/docker/cp-test_ha-263112-m04_ha-263112-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.89s)
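
Each copy step above follows the same pattern; one representative round trip between the host and a node looks like this (node paths as used by the test, local target path is a placeholder):

    minikube -p ha-263112 cp testdata/cp-test.txt ha-263112-m02:/home/docker/cp-test.txt
    minikube -p ha-263112 ssh -n ha-263112-m02 "sudo cat /home/docker/cp-test.txt"
    minikube -p ha-263112 cp ha-263112-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-263112-m02.txt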

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (84.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node stop m02 --alsologtostderr -v 5
E1217 11:35:54.130614 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 node stop m02 --alsologtostderr -v 5: (1m24.181195031s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5: exit status 7 (479.212513ms)

                                                
                                                
-- stdout --
	ha-263112
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263112-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263112-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-263112-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:36:53.531078 1363010 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:36:53.531370 1363010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:36:53.531381 1363010 out.go:374] Setting ErrFile to fd 2...
	I1217 11:36:53.531386 1363010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:36:53.531611 1363010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:36:53.531780 1363010 out.go:368] Setting JSON to false
	I1217 11:36:53.531816 1363010 mustload.go:66] Loading cluster: ha-263112
	I1217 11:36:53.531874 1363010 notify.go:221] Checking for updates...
	I1217 11:36:53.532424 1363010 config.go:182] Loaded profile config "ha-263112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:36:53.532448 1363010 status.go:174] checking status of ha-263112 ...
	I1217 11:36:53.534825 1363010 status.go:371] ha-263112 host status = "Running" (err=<nil>)
	I1217 11:36:53.534846 1363010 host.go:66] Checking if "ha-263112" exists ...
	I1217 11:36:53.537932 1363010 main.go:143] libmachine: domain ha-263112 has defined MAC address 52:54:00:57:07:f1 in network mk-ha-263112
	I1217 11:36:53.538605 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:07:f1", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:30:44 +0000 UTC Type:0 Mac:52:54:00:57:07:f1 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-263112 Clientid:01:52:54:00:57:07:f1}
	I1217 11:36:53.538659 1363010 main.go:143] libmachine: domain ha-263112 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:07:f1 in network mk-ha-263112
	I1217 11:36:53.538892 1363010 host.go:66] Checking if "ha-263112" exists ...
	I1217 11:36:53.539163 1363010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:36:53.541612 1363010 main.go:143] libmachine: domain ha-263112 has defined MAC address 52:54:00:57:07:f1 in network mk-ha-263112
	I1217 11:36:53.542116 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:07:f1", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:30:44 +0000 UTC Type:0 Mac:52:54:00:57:07:f1 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-263112 Clientid:01:52:54:00:57:07:f1}
	I1217 11:36:53.542141 1363010 main.go:143] libmachine: domain ha-263112 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:07:f1 in network mk-ha-263112
	I1217 11:36:53.542312 1363010 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/ha-263112/id_rsa Username:docker}
	I1217 11:36:53.622351 1363010 ssh_runner.go:195] Run: systemctl --version
	I1217 11:36:53.628725 1363010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:36:53.648875 1363010 kubeconfig.go:125] found "ha-263112" server: "https://192.168.39.254:8443"
	I1217 11:36:53.648923 1363010 api_server.go:166] Checking apiserver status ...
	I1217 11:36:53.648967 1363010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:36:53.667965 1363010 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	W1217 11:36:53.679194 1363010 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:36:53.679271 1363010 ssh_runner.go:195] Run: ls
	I1217 11:36:53.684239 1363010 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 11:36:53.689660 1363010 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 11:36:53.689687 1363010 status.go:463] ha-263112 apiserver status = Running (err=<nil>)
	I1217 11:36:53.689701 1363010 status.go:176] ha-263112 status: &{Name:ha-263112 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:36:53.689729 1363010 status.go:174] checking status of ha-263112-m02 ...
	I1217 11:36:53.691522 1363010 status.go:371] ha-263112-m02 host status = "Stopped" (err=<nil>)
	I1217 11:36:53.691538 1363010 status.go:384] host is not running, skipping remaining checks
	I1217 11:36:53.691544 1363010 status.go:176] ha-263112-m02 status: &{Name:ha-263112-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:36:53.691558 1363010 status.go:174] checking status of ha-263112-m03 ...
	I1217 11:36:53.692747 1363010 status.go:371] ha-263112-m03 host status = "Running" (err=<nil>)
	I1217 11:36:53.692762 1363010 host.go:66] Checking if "ha-263112-m03" exists ...
	I1217 11:36:53.695135 1363010 main.go:143] libmachine: domain ha-263112-m03 has defined MAC address 52:54:00:70:8a:80 in network mk-ha-263112
	I1217 11:36:53.695562 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:8a:80", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:32:45 +0000 UTC Type:0 Mac:52:54:00:70:8a:80 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-263112-m03 Clientid:01:52:54:00:70:8a:80}
	I1217 11:36:53.695583 1363010 main.go:143] libmachine: domain ha-263112-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:70:8a:80 in network mk-ha-263112
	I1217 11:36:53.695692 1363010 host.go:66] Checking if "ha-263112-m03" exists ...
	I1217 11:36:53.695881 1363010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:36:53.697935 1363010 main.go:143] libmachine: domain ha-263112-m03 has defined MAC address 52:54:00:70:8a:80 in network mk-ha-263112
	I1217 11:36:53.698330 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:8a:80", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:32:45 +0000 UTC Type:0 Mac:52:54:00:70:8a:80 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:ha-263112-m03 Clientid:01:52:54:00:70:8a:80}
	I1217 11:36:53.698350 1363010 main.go:143] libmachine: domain ha-263112-m03 has defined IP address 192.168.39.78 and MAC address 52:54:00:70:8a:80 in network mk-ha-263112
	I1217 11:36:53.698484 1363010 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/ha-263112-m03/id_rsa Username:docker}
	I1217 11:36:53.784343 1363010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:36:53.801344 1363010 kubeconfig.go:125] found "ha-263112" server: "https://192.168.39.254:8443"
	I1217 11:36:53.801379 1363010 api_server.go:166] Checking apiserver status ...
	I1217 11:36:53.801427 1363010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:36:53.820364 1363010 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1841/cgroup
	W1217 11:36:53.831557 1363010 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1841/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:36:53.831617 1363010 ssh_runner.go:195] Run: ls
	I1217 11:36:53.836635 1363010 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 11:36:53.841580 1363010 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 11:36:53.841607 1363010 status.go:463] ha-263112-m03 apiserver status = Running (err=<nil>)
	I1217 11:36:53.841617 1363010 status.go:176] ha-263112-m03 status: &{Name:ha-263112-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:36:53.841634 1363010 status.go:174] checking status of ha-263112-m04 ...
	I1217 11:36:53.843355 1363010 status.go:371] ha-263112-m04 host status = "Running" (err=<nil>)
	I1217 11:36:53.843374 1363010 host.go:66] Checking if "ha-263112-m04" exists ...
	I1217 11:36:53.845914 1363010 main.go:143] libmachine: domain ha-263112-m04 has defined MAC address 52:54:00:57:78:7d in network mk-ha-263112
	I1217 11:36:53.846387 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:78:7d", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:34:45 +0000 UTC Type:0 Mac:52:54:00:57:78:7d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-263112-m04 Clientid:01:52:54:00:57:78:7d}
	I1217 11:36:53.846414 1363010 main.go:143] libmachine: domain ha-263112-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:57:78:7d in network mk-ha-263112
	I1217 11:36:53.846568 1363010 host.go:66] Checking if "ha-263112-m04" exists ...
	I1217 11:36:53.846776 1363010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:36:53.848719 1363010 main.go:143] libmachine: domain ha-263112-m04 has defined MAC address 52:54:00:57:78:7d in network mk-ha-263112
	I1217 11:36:53.849084 1363010 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:57:78:7d", ip: ""} in network mk-ha-263112: {Iface:virbr1 ExpiryTime:2025-12-17 12:34:45 +0000 UTC Type:0 Mac:52:54:00:57:78:7d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-263112-m04 Clientid:01:52:54:00:57:78:7d}
	I1217 11:36:53.849109 1363010 main.go:143] libmachine: domain ha-263112-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:57:78:7d in network mk-ha-263112
	I1217 11:36:53.849254 1363010 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/ha-263112-m04/id_rsa Username:docker}
	I1217 11:36:53.928562 1363010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:36:53.944878 1363010 status.go:176] ha-263112-m04 status: &{Name:ha-263112-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (84.66s)
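
The non-zero status exit above is expected once a node is down; the sequence is effectively:

    minikube -p ha-263112 node stop m02
    minikube -p ha-263112 status    # exits 7 while ha-263112-m02 reports Stopped, as in the output above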

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (31.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node start m02 --alsologtostderr -v 5
E1217 11:37:16.052907 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 node start m02 --alsologtostderr -v 5: (30.791261201s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 stop --alsologtostderr -v 5
E1217 11:37:27.382146 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:38:50.451030 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:39:32.193252 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:39:52.979906 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:39:59.896405 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 stop --alsologtostderr -v 5: (4m5.1085862s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 start --wait true --alsologtostderr -v 5
E1217 11:42:27.381471 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 start --wait true --alsologtostderr -v 5: (1m58.401355886s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (363.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 node delete m03 --alsologtostderr -v 5: (17.242302113s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.90s)
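The go-template passed to kubectl in ha_test.go:521 above walks every node in the list and prints the status of its "Ready" condition. Because kubectl's go-template output format is ordinary Go text/template syntax evaluated against the JSON object, the same template can be exercised in isolation; the node data below is a made-up stand-in for what `kubectl get nodes -o json` would return:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Hypothetical stand-in for the JSON object `kubectl get nodes -o json` returns.
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
					map[string]any{"type": "MemoryPressure", "status": "False"},
				}}},
			},
		}
		// The exact template string the test passes to kubectl above.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		t := template.Must(template.New("ready").Parse(tmpl))
		_ = t.Execute(os.Stdout, nodes) // prints " True" for the single Ready node
	}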

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (248.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 stop --alsologtostderr -v 5
E1217 11:44:32.194358 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:44:52.979911 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:46:16.046597 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:47:27.381918 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 stop --alsologtostderr -v 5: (4m8.097325895s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5: exit status 7 (68.888415ms)

                                                
                                                
-- stdout --
	ha-263112
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263112-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-263112-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:47:57.260764 1366145 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:47:57.261092 1366145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:47:57.261103 1366145 out.go:374] Setting ErrFile to fd 2...
	I1217 11:47:57.261107 1366145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:47:57.261320 1366145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:47:57.261494 1366145 out.go:368] Setting JSON to false
	I1217 11:47:57.261527 1366145 mustload.go:66] Loading cluster: ha-263112
	I1217 11:47:57.261574 1366145 notify.go:221] Checking for updates...
	I1217 11:47:57.261883 1366145 config.go:182] Loaded profile config "ha-263112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:47:57.261901 1366145 status.go:174] checking status of ha-263112 ...
	I1217 11:47:57.264249 1366145 status.go:371] ha-263112 host status = "Stopped" (err=<nil>)
	I1217 11:47:57.264264 1366145 status.go:384] host is not running, skipping remaining checks
	I1217 11:47:57.264269 1366145 status.go:176] ha-263112 status: &{Name:ha-263112 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:47:57.264284 1366145 status.go:174] checking status of ha-263112-m02 ...
	I1217 11:47:57.265631 1366145 status.go:371] ha-263112-m02 host status = "Stopped" (err=<nil>)
	I1217 11:47:57.265645 1366145 status.go:384] host is not running, skipping remaining checks
	I1217 11:47:57.265651 1366145 status.go:176] ha-263112-m02 status: &{Name:ha-263112-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:47:57.265675 1366145 status.go:174] checking status of ha-263112-m04 ...
	I1217 11:47:57.267168 1366145 status.go:371] ha-263112-m04 host status = "Stopped" (err=<nil>)
	I1217 11:47:57.267181 1366145 status.go:384] host is not running, skipping remaining checks
	I1217 11:47:57.267186 1366145 status.go:176] ha-263112-m04 status: &{Name:ha-263112-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (248.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (96.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 11:49:32.190961 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m36.074764716s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (96.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (103.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 node add --control-plane --alsologtostderr -v 5
E1217 11:49:52.980360 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:50:55.258758 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-263112 node add --control-plane --alsologtostderr -v 5: (1m42.922572581s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-263112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (103.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
x
+
TestJSONOutput/start/Command (47.17s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-670975 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-670975 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (47.169808853s)
--- PASS: TestJSONOutput/start/Command (47.17s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-670975 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-670975 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.83s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-670975 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-670975 --output=json --user=testUser: (6.827205466s)
--- PASS: TestJSONOutput/stop/Command (6.83s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-061903 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-061903 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.678362ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"14ea4d8a-9fff-4730-bbf4-72b35911af7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-061903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9bf84d9-9b58-42c0-8c03-ac446993b7a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"a547c73b-4b6e-4178-a913-f198956853ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5bccd6db-0b14-48ff-827a-52945b9a3dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig"}}
	{"specversion":"1.0","id":"16239938-21c6-4ab3-b230-8bbe90485a29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube"}}
	{"specversion":"1.0","id":"81f33725-ebd1-44d7-a13a-86d507cc78d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fe8b2ec3-60cb-4739-877e-a954d2803464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b142bcb9-ee7f-4985-a9b8-d471f19d6941","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-061903" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-061903
--- PASS: TestErrorJSONOutput (0.23s)
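Each line in the stdout block above is a single JSON object in the CloudEvents envelope that minikube emits for --output=json (specversion, id, source, type, datacontenttype, data). A minimal sketch of scanning such a stream and surfacing error events like the DRV_UNSUPPORTED_OS one above; the struct covers only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors only the fields visible in the --output=json lines above.
	type cloudEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore lines that are not JSON events
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}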

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (75.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-585940 --driver=kvm2  --container-runtime=crio
E1217 11:52:27.384772 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-585940 --driver=kvm2  --container-runtime=crio: (35.134137518s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-588327 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-588327 --driver=kvm2  --container-runtime=crio: (37.571775526s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-585940
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-588327
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-588327" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-588327
helpers_test.go:176: Cleaning up "first-585940" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-585940
--- PASS: TestMinikubeProfile (75.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-555781 --memory=3072 --mount-string /tmp/TestMountStartserial4206425291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-555781 --memory=3072 --mount-string /tmp/TestMountStartserial4206425291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.334458374s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.34s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-555781 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-555781 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (21.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-576709 --memory=3072 --mount-string /tmp/TestMountStartserial4206425291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-576709 --memory=3072 --mount-string /tmp/TestMountStartserial4206425291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.289350612s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-555781 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-576709
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-576709: (1.241128079s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.58s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-576709
E1217 11:54:32.194152 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-576709: (17.578145102s)
--- PASS: TestMountStart/serial/RestartStopped (18.58s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-576709 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (99.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245791 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 11:54:52.980139 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:55:30.453663 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245791 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.040377734s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.37s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-245791 -- rollout status deployment/busybox: (4.363916017s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-7gvt4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-g5vmz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-7gvt4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-g5vmz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-7gvt4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-g5vmz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.02s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-7gvt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-7gvt4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-g5vmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-245791 -- exec busybox-7b57f96db7-g5vmz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (45.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-245791 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-245791 -v=5 --alsologtostderr: (44.965363556s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (45.38s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-245791 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp testdata/cp-test.txt multinode-245791:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2897516968/001/cp-test_multinode-245791.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791:/home/docker/cp-test.txt multinode-245791-m02:/home/docker/cp-test_multinode-245791_multinode-245791-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test_multinode-245791_multinode-245791-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791:/home/docker/cp-test.txt multinode-245791-m03:/home/docker/cp-test_multinode-245791_multinode-245791-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test_multinode-245791_multinode-245791-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp testdata/cp-test.txt multinode-245791-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2897516968/001/cp-test_multinode-245791-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m02:/home/docker/cp-test.txt multinode-245791:/home/docker/cp-test_multinode-245791-m02_multinode-245791.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test_multinode-245791-m02_multinode-245791.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m02:/home/docker/cp-test.txt multinode-245791-m03:/home/docker/cp-test_multinode-245791-m02_multinode-245791-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test_multinode-245791-m02_multinode-245791-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp testdata/cp-test.txt multinode-245791-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2897516968/001/cp-test_multinode-245791-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m03:/home/docker/cp-test.txt multinode-245791:/home/docker/cp-test_multinode-245791-m03_multinode-245791.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791 "sudo cat /home/docker/cp-test_multinode-245791-m03_multinode-245791.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 cp multinode-245791-m03:/home/docker/cp-test.txt multinode-245791-m02:/home/docker/cp-test_multinode-245791-m03_multinode-245791-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 ssh -n multinode-245791-m02 "sudo cat /home/docker/cp-test_multinode-245791-m03_multinode-245791-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.88s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-245791 node stop m03: (1.628600571s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245791 status: exit status 7 (316.700349ms)

                                                
                                                
-- stdout --
	multinode-245791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-245791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-245791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr: exit status 7 (318.854544ms)

                                                
                                                
-- stdout --
	multinode-245791
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-245791-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-245791-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 11:57:17.680737 1371692 out.go:360] Setting OutFile to fd 1 ...
	I1217 11:57:17.680972 1371692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:57:17.680999 1371692 out.go:374] Setting ErrFile to fd 2...
	I1217 11:57:17.681005 1371692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 11:57:17.681471 1371692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 11:57:17.681641 1371692 out.go:368] Setting JSON to false
	I1217 11:57:17.681674 1371692 mustload.go:66] Loading cluster: multinode-245791
	I1217 11:57:17.681719 1371692 notify.go:221] Checking for updates...
	I1217 11:57:17.682003 1371692 config.go:182] Loaded profile config "multinode-245791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 11:57:17.682017 1371692 status.go:174] checking status of multinode-245791 ...
	I1217 11:57:17.684538 1371692 status.go:371] multinode-245791 host status = "Running" (err=<nil>)
	I1217 11:57:17.684559 1371692 host.go:66] Checking if "multinode-245791" exists ...
	I1217 11:57:17.687645 1371692 main.go:143] libmachine: domain multinode-245791 has defined MAC address 52:54:00:82:8d:44 in network mk-multinode-245791
	I1217 11:57:17.688163 1371692 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8d:44", ip: ""} in network mk-multinode-245791: {Iface:virbr1 ExpiryTime:2025-12-17 12:54:52 +0000 UTC Type:0 Mac:52:54:00:82:8d:44 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-245791 Clientid:01:52:54:00:82:8d:44}
	I1217 11:57:17.688206 1371692 main.go:143] libmachine: domain multinode-245791 has defined IP address 192.168.39.129 and MAC address 52:54:00:82:8d:44 in network mk-multinode-245791
	I1217 11:57:17.688365 1371692 host.go:66] Checking if "multinode-245791" exists ...
	I1217 11:57:17.688636 1371692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:57:17.691188 1371692 main.go:143] libmachine: domain multinode-245791 has defined MAC address 52:54:00:82:8d:44 in network mk-multinode-245791
	I1217 11:57:17.691605 1371692 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:8d:44", ip: ""} in network mk-multinode-245791: {Iface:virbr1 ExpiryTime:2025-12-17 12:54:52 +0000 UTC Type:0 Mac:52:54:00:82:8d:44 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-245791 Clientid:01:52:54:00:82:8d:44}
	I1217 11:57:17.691628 1371692 main.go:143] libmachine: domain multinode-245791 has defined IP address 192.168.39.129 and MAC address 52:54:00:82:8d:44 in network mk-multinode-245791
	I1217 11:57:17.691802 1371692 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/multinode-245791/id_rsa Username:docker}
	I1217 11:57:17.770857 1371692 ssh_runner.go:195] Run: systemctl --version
	I1217 11:57:17.776773 1371692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:57:17.792564 1371692 kubeconfig.go:125] found "multinode-245791" server: "https://192.168.39.129:8443"
	I1217 11:57:17.792615 1371692 api_server.go:166] Checking apiserver status ...
	I1217 11:57:17.792676 1371692 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 11:57:17.812896 1371692 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	W1217 11:57:17.824596 1371692 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 11:57:17.824647 1371692 ssh_runner.go:195] Run: ls
	I1217 11:57:17.829292 1371692 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I1217 11:57:17.834137 1371692 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I1217 11:57:17.834157 1371692 status.go:463] multinode-245791 apiserver status = Running (err=<nil>)
	I1217 11:57:17.834169 1371692 status.go:176] multinode-245791 status: &{Name:multinode-245791 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:57:17.834188 1371692 status.go:174] checking status of multinode-245791-m02 ...
	I1217 11:57:17.835964 1371692 status.go:371] multinode-245791-m02 host status = "Running" (err=<nil>)
	I1217 11:57:17.836001 1371692 host.go:66] Checking if "multinode-245791-m02" exists ...
	I1217 11:57:17.838814 1371692 main.go:143] libmachine: domain multinode-245791-m02 has defined MAC address 52:54:00:53:c7:3e in network mk-multinode-245791
	I1217 11:57:17.839354 1371692 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:53:c7:3e", ip: ""} in network mk-multinode-245791: {Iface:virbr1 ExpiryTime:2025-12-17 12:55:46 +0000 UTC Type:0 Mac:52:54:00:53:c7:3e Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-245791-m02 Clientid:01:52:54:00:53:c7:3e}
	I1217 11:57:17.839385 1371692 main.go:143] libmachine: domain multinode-245791-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:53:c7:3e in network mk-multinode-245791
	I1217 11:57:17.839550 1371692 host.go:66] Checking if "multinode-245791-m02" exists ...
	I1217 11:57:17.839795 1371692 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 11:57:17.842058 1371692 main.go:143] libmachine: domain multinode-245791-m02 has defined MAC address 52:54:00:53:c7:3e in network mk-multinode-245791
	I1217 11:57:17.842450 1371692 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:53:c7:3e", ip: ""} in network mk-multinode-245791: {Iface:virbr1 ExpiryTime:2025-12-17 12:55:46 +0000 UTC Type:0 Mac:52:54:00:53:c7:3e Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:multinode-245791-m02 Clientid:01:52:54:00:53:c7:3e}
	I1217 11:57:17.842470 1371692 main.go:143] libmachine: domain multinode-245791-m02 has defined IP address 192.168.39.106 and MAC address 52:54:00:53:c7:3e in network mk-multinode-245791
	I1217 11:57:17.842586 1371692 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21808-1345916/.minikube/machines/multinode-245791-m02/id_rsa Username:docker}
	I1217 11:57:17.920106 1371692 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 11:57:17.935618 1371692 status.go:176] multinode-245791-m02 status: &{Name:multinode-245791-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 11:57:17.935653 1371692 status.go:174] checking status of multinode-245791-m03 ...
	I1217 11:57:17.937399 1371692 status.go:371] multinode-245791-m03 host status = "Stopped" (err=<nil>)
	I1217 11:57:17.937421 1371692 status.go:384] host is not running, skipping remaining checks
	I1217 11:57:17.937429 1371692 status.go:176] multinode-245791-m03 status: &{Name:multinode-245791-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
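For the running control-plane node, the status stderr above decides the apiserver field with an HTTP probe: it locates the kube-apiserver process, then requests https://192.168.39.129:8443/healthz and treats a 200 response as Running. A standalone sketch of that last step, with the endpoint copied from the log and TLS verification skipped only to keep the sketch self-contained:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the status log above.
		url := "https://192.168.39.129:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Verification skipped only to keep this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver: Stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned", resp.StatusCode) // 200 => apiserver Running
	}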

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (36.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 node start m03 -v=5 --alsologtostderr
E1217 11:57:27.381660 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-245791 node start m03 -v=5 --alsologtostderr: (36.410859796s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.91s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (292.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245791
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-245791
E1217 11:59:32.194208 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 11:59:52.980413 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-245791: (2m40.948847352s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245791 --wait=true -v=5 --alsologtostderr
E1217 12:02:27.382267 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245791 --wait=true -v=5 --alsologtostderr: (2m11.206625201s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245791
--- PASS: TestMultiNode/serial/RestartKeepsNodes (292.29s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-245791 node delete m03: (2.087592064s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.58s)
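The go-template in the final kubectl call is easier to read when unpacked: it ranges over every node, then over each node's status.conditions, and prints only the status of the Ready condition, so a healthy two-node cluster yields two lines of "True". The same check outside the test harness, as a sketch:

	# one line per node: the value of its Ready condition
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'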

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (165.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 stop
E1217 12:02:56.050363 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:04:32.192157 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:04:52.979759 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-245791 stop: (2m44.986700614s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245791 status: exit status 7 (68.661483ms)

                                                
                                                
-- stdout --
	multinode-245791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-245791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr: exit status 7 (67.223558ms)

                                                
                                                
-- stdout --
	multinode-245791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-245791-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 12:05:34.840795 1374118 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:05:34.840903 1374118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:05:34.840911 1374118 out.go:374] Setting ErrFile to fd 2...
	I1217 12:05:34.840915 1374118 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:05:34.841097 1374118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:05:34.841287 1374118 out.go:368] Setting JSON to false
	I1217 12:05:34.841322 1374118 mustload.go:66] Loading cluster: multinode-245791
	I1217 12:05:34.841464 1374118 notify.go:221] Checking for updates...
	I1217 12:05:34.841704 1374118 config.go:182] Loaded profile config "multinode-245791": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:05:34.841723 1374118 status.go:174] checking status of multinode-245791 ...
	I1217 12:05:34.843962 1374118 status.go:371] multinode-245791 host status = "Stopped" (err=<nil>)
	I1217 12:05:34.843993 1374118 status.go:384] host is not running, skipping remaining checks
	I1217 12:05:34.844001 1374118 status.go:176] multinode-245791 status: &{Name:multinode-245791 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 12:05:34.844023 1374118 status.go:174] checking status of multinode-245791-m02 ...
	I1217 12:05:34.845263 1374118 status.go:371] multinode-245791-m02 host status = "Stopped" (err=<nil>)
	I1217 12:05:34.845276 1374118 status.go:384] host is not running, skipping remaining checks
	I1217 12:05:34.845281 1374118 status.go:176] multinode-245791-m02 status: &{Name:multinode-245791-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (165.12s)
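Both "Non-zero exit" entries above are expected: minikube status reports a stopped cluster through its exit code, and the stdout shows host, kubelet, apiserver and kubeconfig all Stopped, which in this run surfaces as exit status 7. A sketch of how a wrapper script might treat that, assuming the same profile:

	minikube -p multinode-245791 status
	case $? in
	  0) echo "cluster running" ;;
	  7) echo "cluster fully stopped" ;;   # matches the Stopped/Stopped/Stopped output above
	  *) echo "partial or unknown state" ;;
	esac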

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (93.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245791 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245791 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m32.632791367s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-245791 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (93.10s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (38.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-245791
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245791-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-245791-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.492619ms)

                                                
                                                
-- stdout --
	* [multinode-245791-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-245791-m02' is duplicated with machine name 'multinode-245791-m02' in profile 'multinode-245791'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-245791-m03 --driver=kvm2  --container-runtime=crio
E1217 12:07:27.381926 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:07:35.261653 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-245791-m03 --driver=kvm2  --container-runtime=crio: (36.998889473s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-245791
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-245791: exit status 80 (230.894239ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-245791 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-245791-m03 already exists in multinode-245791-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-245791-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.26s)
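Both rejected commands above are name-validation checks: a new profile may not reuse a machine name that already belongs to another profile (exit status 14), and node add refuses to create a node whose name collides with an existing profile (exit status 80). Before picking names, the taken ones can be listed, for example:

	# existing profiles, and the nodes inside the multinode profile
	minikube profile list
	minikube node list -p multinode-245791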

                                                
                                    
x
+
TestScheduledStopUnix (106.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-249609 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-249609 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.167327502s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249609 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 12:10:52.239819 1376482 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:10:52.240090 1376482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:52.240099 1376482 out.go:374] Setting ErrFile to fd 2...
	I1217 12:10:52.240103 1376482 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:52.240293 1376482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:10:52.240517 1376482 out.go:368] Setting JSON to false
	I1217 12:10:52.240603 1376482 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:10:52.240901 1376482 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:10:52.240975 1376482 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/config.json ...
	I1217 12:10:52.241174 1376482 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:10:52.241286 1376482 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-249609 -n scheduled-stop-249609
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249609 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 12:10:52.541109 1376528 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:10:52.541212 1376528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:52.541217 1376528 out.go:374] Setting ErrFile to fd 2...
	I1217 12:10:52.541221 1376528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:10:52.541430 1376528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:10:52.541650 1376528 out.go:368] Setting JSON to false
	I1217 12:10:52.541857 1376528 daemonize_unix.go:73] killing process 1376517 as it is an old scheduled stop
	I1217 12:10:52.541977 1376528 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:10:52.542344 1376528 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:10:52.542417 1376528 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/config.json ...
	I1217 12:10:52.542602 1376528 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:10:52.542708 1376528 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 12:10:52.547777 1349907 retry.go:31] will retry after 59.513µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.548942 1349907 retry.go:31] will retry after 175.785µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.550052 1349907 retry.go:31] will retry after 275.243µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.551208 1349907 retry.go:31] will retry after 432.27µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.552336 1349907 retry.go:31] will retry after 691.175µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.553472 1349907 retry.go:31] will retry after 663.328µs: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.554600 1349907 retry.go:31] will retry after 1.675409ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.556836 1349907 retry.go:31] will retry after 1.470462ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.559042 1349907 retry.go:31] will retry after 2.126636ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.562279 1349907 retry.go:31] will retry after 3.507197ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.566493 1349907 retry.go:31] will retry after 6.924001ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.573736 1349907 retry.go:31] will retry after 12.77903ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.587011 1349907 retry.go:31] will retry after 18.061338ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.605244 1349907 retry.go:31] will retry after 23.515743ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.629585 1349907 retry.go:31] will retry after 17.655597ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
I1217 12:10:52.648057 1349907 retry.go:31] will retry after 51.137334ms: open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249609 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249609 -n scheduled-stop-249609
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-249609
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-249609 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 12:11:18.293188 1376677 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:11:18.293451 1376677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:11:18.293460 1376677 out.go:374] Setting ErrFile to fd 2...
	I1217 12:11:18.293464 1376677 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:11:18.293635 1376677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:11:18.293872 1376677 out.go:368] Setting JSON to false
	I1217 12:11:18.293953 1376677 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:11:18.294266 1376677 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:11:18.294330 1376677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/scheduled-stop-249609/config.json ...
	I1217 12:11:18.294512 1376677 mustload.go:66] Loading cluster: scheduled-stop-249609
	I1217 12:11:18.294625 1376677 config.go:182] Loaded profile config "scheduled-stop-249609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-249609
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-249609: exit status 7 (65.964092ms)

                                                
                                                
-- stdout --
	scheduled-stop-249609
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249609 -n scheduled-stop-249609
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-249609 -n scheduled-stop-249609: exit status 7 (62.8807ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-249609" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-249609
--- PASS: TestScheduledStopUnix (106.87s)
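The test walks the whole scheduled-stop lifecycle: schedule a stop, reschedule (which kills the previous scheduler process), cancel, then schedule again and let it fire so that status reports Stopped. Condensed into the underlying commands, assuming the scheduled-stop-249609 profile from this run:

	minikube stop -p scheduled-stop-249609 --schedule 5m       # park a stop 5 minutes out
	minikube stop -p scheduled-stop-249609 --schedule 15s      # replace it; the old scheduled-stop process is killed
	minikube stop -p scheduled-stop-249609 --cancel-scheduled  # "All existing scheduled stops cancelled"
	minikube stop -p scheduled-stop-249609 --schedule 15s      # schedule once more and let it run
	sleep 20; minikube status -p scheduled-stop-249609         # now exits 7 with host/kubelet/apiserver Stopped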

                                                
                                    
x
+
TestRunningBinaryUpgrade (349.89s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.561518254 start -p running-upgrade-616756 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.561518254 start -p running-upgrade-616756 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m1.513360459s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-616756 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-616756 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m43.640801131s)
helpers_test.go:176: Cleaning up "running-upgrade-616756" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-616756
--- PASS: TestRunningBinaryUpgrade (349.89s)

                                                
                                    
x
+
TestKubernetesUpgrade (114.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.006418916s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-730136
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-730136: (1.883003251s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-730136 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-730136 status --format={{.Host}}: exit status 7 (81.801791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.701756693s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-730136 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (83.262082ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-730136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-730136
	    minikube start -p kubernetes-upgrade-730136 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7301362 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-730136 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1217 12:14:52.979915 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-730136 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (15.54031248s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-730136" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-730136
--- PASS: TestKubernetesUpgrade (114.35s)
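The flow here is: create a v1.28.0 cluster, stop it, restart the same profile at v1.35.0-rc.1, confirm that asking for v1.28.0 again is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), then restart once more on the newer version. As the suggestion block above notes, the only supported way back to the old version is to recreate the profile, e.g.:

	minikube delete -p kubernetes-upgrade-730136
	minikube start -p kubernetes-upgrade-730136 --kubernetes-version=v1.28.0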

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (103.558057ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-379176] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
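As the MK_USAGE error states, --no-kubernetes cannot be combined with --kubernetes-version, and a version pinned in the global config has to be unset first. A sketch of the accepted form:

	# clear any globally configured version, then start the profile with no Kubernetes components
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-379176 --no-kubernetes --driver=kvm2 --container-runtime=crio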

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (93.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-379176 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1217 12:12:10.455733 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:12:27.381807 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-379176 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m33.57435711s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-379176 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (93.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (49.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.40954161s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-379176 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-379176 status -o json: exit status 2 (215.840181ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-379176","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-379176
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (49.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1217 12:14:32.190326 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-379176 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (29.973535431s)
--- PASS: TestNoKubernetes/serial/Start (29.97s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (107.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.592219578 start -p stopped-upgrade-630475 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.592219578 start -p stopped-upgrade-630475 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m0.31259825s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.592219578 -p stopped-upgrade-630475 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.592219578 -p stopped-upgrade-630475 stop: (1.838290777s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-630475 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-630475 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.064233527s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21808-1345916/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-379176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-379176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.518813ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
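The non-zero exit is the point of this check: systemctl is-active --quiet exits non-zero when the unit is not active, so a failing ssh command is how the test confirms kubelet is not running on a --no-kubernetes node. Roughly, outside the harness:

	if ! minikube ssh -p NoKubernetes-379176 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet inactive, as expected"
	fi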

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (10.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (9.297540208s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (10.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-379176
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-379176: (1.415197222s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (30.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-379176 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-379176 --driver=kvm2  --container-runtime=crio: (30.391782511s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-379176 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-379176 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.721096ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-470455 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-470455 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.323054ms)

                                                
                                                
-- stdout --
	* [false-470455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 12:15:44.462084 1380981 out.go:360] Setting OutFile to fd 1 ...
	I1217 12:15:44.462198 1380981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:15:44.462207 1380981 out.go:374] Setting ErrFile to fd 2...
	I1217 12:15:44.462210 1380981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 12:15:44.462423 1380981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-1345916/.minikube/bin
	I1217 12:15:44.462926 1380981 out.go:368] Setting JSON to false
	I1217 12:15:44.463910 1380981 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21483,"bootTime":1765952261,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 12:15:44.463969 1380981 start.go:143] virtualization: kvm guest
	I1217 12:15:44.465752 1380981 out.go:179] * [false-470455] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 12:15:44.466978 1380981 notify.go:221] Checking for updates...
	I1217 12:15:44.466999 1380981 out.go:179]   - MINIKUBE_LOCATION=21808
	I1217 12:15:44.468210 1380981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 12:15:44.469349 1380981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-1345916/kubeconfig
	I1217 12:15:44.470529 1380981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-1345916/.minikube
	I1217 12:15:44.471634 1380981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 12:15:44.472860 1380981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 12:15:44.474553 1380981 config.go:182] Loaded profile config "cert-expiration-026544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 12:15:44.474670 1380981 config.go:182] Loaded profile config "running-upgrade-616756": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 12:15:44.474765 1380981 config.go:182] Loaded profile config "stopped-upgrade-630475": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 12:15:44.474896 1380981 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 12:15:44.507908 1380981 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 12:15:44.509040 1380981 start.go:309] selected driver: kvm2
	I1217 12:15:44.509055 1380981 start.go:927] validating driver "kvm2" against <nil>
	I1217 12:15:44.509070 1380981 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 12:15:44.510926 1380981 out.go:203] 
	W1217 12:15:44.512092 1380981 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 12:15:44.513166 1380981 out.go:203] 

                                                
                                                
** /stderr **
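This rejection is the expected result for the "false" CNI group: the crio container runtime requires a CNI, so --cni=false fails validation (MK_USAGE) before any VM is created, which is also why every probe in the debug log below reports that the false-470455 profile and context do not exist. For comparison, a start line that crio would accept, as a sketch (picking the bridge CNI explicitly is one option; omitting --cni lets minikube auto-select):

	minikube start -p false-470455 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio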
net_test.go:88: 
----------------------- debugLogs start: false-470455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: cert-expiration-026544
contexts:
- context:
    cluster: cert-expiration-026544
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-026544
  name: cert-expiration-026544
current-context: ""
kind: Config
users:
- name: cert-expiration-026544
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-470455

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-470455"

                                                
                                                
----------------------- debugLogs end: false-470455 [took: 3.502137729s] --------------------------------
helpers_test.go:176: Cleaning up "false-470455" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-470455
--- PASS: TestNetworkPlugins/group/false (3.81s)

                                                
                                    
TestPause/serial/Start (105.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-137189 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-137189 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.989646669s)
--- PASS: TestPause/serial/Start (105.99s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-630475
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestISOImage/Setup (30.93s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-887598 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-887598 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.933231469s)
--- PASS: TestISOImage/Setup (30.93s)

                                                
                                    
TestISOImage/Binaries/crictl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.20s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
TestISOImage/Binaries/git (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

                                                
                                    
TestISOImage/Binaries/iptables (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

                                                
                                    
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
TestISOImage/Binaries/rsync (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

                                                
                                    
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
TestISOImage/Binaries/wget (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)
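Each of the Binaries subtests above runs a single `which` over ssh against the guest-887598 profile. A hedged consolidation of the same checks for local use (not part of the test suite; profile name and binary list copied from the log above):

# Verify every expected guest binary in one pass.
for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
  out/minikube-linux-amd64 -p guest-887598 ssh "which $bin"
done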

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (109.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 12:17:27.382442 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m49.6990235s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (101.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (1m41.758791707s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (101.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-757245 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [97bb21bf-d122-42b6-980e-995523dbd998] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [97bb21bf-d122-42b6-980e-995523dbd998] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004318115s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-757245 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.40s)
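DeployApp applies testdata/busybox.yaml and waits for pods matching integration-test=busybox before running `ulimit -n` inside the pod. The manifest itself is not included in this report; the following is only a hypothetical stand-in inferred from the selector, pod name, and the busybox image listed later in this report (the real testdata file may differ):

# Hypothetical approximation of testdata/busybox.yaml; not taken from the repository.
kubectl --context old-k8s-version-757245 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF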

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (79.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-539112 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-539112 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m19.477765657s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-757245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-757245 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036119s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-757245 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (81.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-757245 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-757245 --alsologtostderr -v=3: (1m21.151616116s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-837348 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5b21ed08-04a2-4cdc-9571-45a018338dda] Pending
helpers_test.go:353: "busybox" [5b21ed08-04a2-4cdc-9571-45a018338dda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5b21ed08-04a2-4cdc-9571-45a018338dda] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003955136s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-837348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-837348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-837348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.740676445s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-837348 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (85.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-837348 --alsologtostderr -v=3
E1217 12:19:32.191179 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:19:36.052477 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:19:52.980610 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-837348 --alsologtostderr -v=3: (1m25.355285342s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-539112 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [68e3c428-ccb0-4b3f-a6a3-597e7482a873] Pending
helpers_test.go:353: "busybox" [68e3c428-ccb0-4b3f-a6a3-597e7482a873] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [68e3c428-ccb0-4b3f-a6a3-597e7482a873] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003289081s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-539112 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-539112 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-539112 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (83.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-539112 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-539112 --alsologtostderr -v=3: (1m23.828800777s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (83.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757245 -n old-k8s-version-757245
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757245 -n old-k8s-version-757245: exit status 7 (79.871246ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-757245 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
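EnableAddonAfterStop tolerates exit status 7 from `minikube status` because the host is intentionally stopped at this point. A small hedged sketch of checking that exit code by hand (command and profile name copied from the step above):

out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757245 -n old-k8s-version-757245
# Prints "Stopped" and exits 7 while the host is down, as observed in the log above.
echo "status exit code: $?"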

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-757245 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (48.529087177s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-757245 -n old-k8s-version-757245
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837348 -n no-preload-837348
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837348 -n no-preload-837348: exit status 7 (90.867062ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-837348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (54.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-837348 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (53.970909274s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-837348 -n no-preload-837348
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-144459 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-144459 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (1m35.637407481s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (95.64s)
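This profile starts the API server on port 8444 via --apiserver-port=8444 rather than the default 8443. A hedged way to confirm the port from the client side (standard kubectl; the context name matches the profile name used by the other steps in this report):

# The reported control-plane URL should end in :8444 for this profile.
kubectl --context default-k8s-diff-port-144459 cluster-info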

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cxg8h" [b35bff6c-ba42-4e38-addb-c497bdb434de] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cxg8h" [b35bff6c-ba42-4e38-addb-c497bdb434de] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005256429s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-cxg8h" [b35bff6c-ba42-4e38-addb-c497bdb434de] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0052164s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-757245 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-757245 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-757245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-757245 --alsologtostderr -v=1: (1.167283721s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757245 -n old-k8s-version-757245
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757245 -n old-k8s-version-757245: exit status 2 (256.278428ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757245 -n old-k8s-version-757245
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757245 -n old-k8s-version-757245: exit status 2 (256.213202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-757245 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-757245 -n old-k8s-version-757245
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-757245 -n old-k8s-version-757245
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.12s)
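The Pause step above alternates pause/unpause with status checks on the APIServer and Kubelet fields, treating exit status 2 as acceptable while paused. The same sequence as a hedged shell sketch (commands copied from the log; inline comments reflect the observed output):

profile=old-k8s-version-757245
out/minikube-linux-amd64 pause -p "$profile" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p "$profile" -n "$profile"   # observed: Paused, exit 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$profile" -n "$profile"     # observed: Stopped, exit 2
out/minikube-linux-amd64 unpause -p "$profile" --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p "$profile" -n "$profile"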

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-752465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-752465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (44.60820432s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-spj7r" [4aa8f90e-7d4f-494c-b3dd-4218f41ecba2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-spj7r" [4aa8f90e-7d4f-494c-b3dd-4218f41ecba2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.003970385s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-539112 -n embed-certs-539112
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-539112 -n embed-certs-539112: exit status 7 (95.950588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-539112 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (57.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-539112 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-539112 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (56.761999144s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-539112 -n embed-certs-539112
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (57.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-spj7r" [4aa8f90e-7d4f-494c-b3dd-4218f41ecba2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003991397s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-837348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-837348 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-837348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837348 -n no-preload-837348
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837348 -n no-preload-837348: exit status 2 (212.57273ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-837348 -n no-preload-837348
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-837348 -n no-preload-837348: exit status 2 (224.438933ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-837348 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-837348 -n no-preload-837348
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-837348 -n no-preload-837348
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.62s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (75.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.713222284s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-752465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-752465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.280978164s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
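The warning above is expected for this profile: it was started with --network-plugin=cni and no CNI has been deployed yet, so, as the test notes, additional setup is needed before pods can schedule. A hedged sketch of the usual follow-up (the manifest path is a placeholder; this report does not specify which CNI to use):

# Placeholder manifest: substitute whichever CNI you actually intend to deploy.
kubectl --context newest-cni-752465 apply -f your-cni-manifest.yaml
kubectl --context newest-cni-752465 -n kube-system get pods -w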

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-752465 --alsologtostderr -v=3
E1217 12:22:27.381906 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/addons-410268/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-752465 --alsologtostderr -v=3: (8.550406151s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-752465 -n newest-cni-752465
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-752465 -n newest-cni-752465: exit status 7 (78.796126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-752465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (44.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-752465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-752465 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (44.18135785s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-752465 -n newest-cni-752465
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-144459 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0c8e7caa-b896-4914-838a-781b381df751] Pending
helpers_test.go:353: "busybox" [0c8e7caa-b896-4914-838a-781b381df751] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0c8e7caa-b896-4914-838a-781b381df751] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.03717223s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-144459 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-wthn8" [5f373990-8ccc-4210-859f-c8c2fda3cc1f] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-wthn8" [5f373990-8ccc-4210-859f-c8c2fda3cc1f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003793226s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-144459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-144459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.458085673s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-144459 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (83.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-144459 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-144459 --alsologtostderr -v=3: (1m23.124398112s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (83.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-wthn8" [5f373990-8ccc-4210-859f-c8c2fda3cc1f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004425441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-539112 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-539112 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-539112 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-539112 --alsologtostderr -v=1: (1.312864243s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-539112 -n embed-certs-539112
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-539112 -n embed-certs-539112: exit status 2 (229.942641ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-539112 -n embed-certs-539112
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-539112 -n embed-certs-539112: exit status 2 (237.954034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-539112 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-539112 -n embed-certs-539112
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-539112 -n embed-certs-539112
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)
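
Note on the Pause cycle above: the test pauses the profile, checks that "minikube status" reports the API server as Paused and the kubelet as Stopped (both return exit status 2, which the test explicitly tolerates as "may be ok"), then unpauses. A minimal standalone sketch of that exit-code handling, assuming the binary path and the embed-certs-539112 profile name from this run; this is not the actual start_stop_delete_test.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes minikube with the given args and returns trimmed stdout plus the exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "embed-certs-539112"

	run("pause", "-p", profile)

	// Like the test, tolerate exit status 2 here: a paused profile reports
	// APIServer=Paused and Kubelet=Stopped with a non-zero status exit code.
	api, code := run("status", "--format={{.APIServer}}", "-p", profile)
	fmt.Printf("APIServer=%s (exit %d, may be ok)\n", api, code)

	kubelet, code := run("status", "--format={{.Kubelet}}", "-p", profile)
	fmt.Printf("Kubelet=%s (exit %d, may be ok)\n", kubelet, code)

	run("unpause", "-p", profile)
}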

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (64.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.049505023s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-752465 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-752465 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-752465 --alsologtostderr -v=1: (1.01930599s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-752465 -n newest-cni-752465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-752465 -n newest-cni-752465: exit status 2 (252.839442ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-752465 -n newest-cni-752465
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-752465 -n newest-cni-752465: exit status 2 (253.394847ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-752465 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-752465 -n newest-cni-752465
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-752465 -n newest-cni-752465
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-470455 "pgrep -a kubelet"
I1217 12:23:16.125634 1349907 config.go:182] Loaded profile config "auto-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jjbgr" [4e1c12e0-5862-405b-989d-1456743e28ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jjbgr" [4e1c12e0-5862-405b-989d-1456743e28ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004129562s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)
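
Note on the NetCatPod step above: it force-replaces the netcat deployment from testdata/netcat-deployment.yaml and then polls until a pod with label app=netcat is Running. Outside the harness, roughly the same wait can be expressed with kubectl alone; a sketch assuming the auto-470455 context from this run (the test uses its own pod-watching helpers rather than kubectl wait).

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the given context and returns combined output.
func kubectl(context string, args ...string) (string, error) {
	full := append([]string{"--context", context}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "auto-470455" // context name from this run

	// Recreate the netcat deployment, as the test does.
	if out, err := kubectl(ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml"); err != nil {
		fmt.Println(out)
		panic(err)
	}

	// Wait for the pod to become Ready; the harness polls pod status instead.
	out, err := kubectl(ctx, "wait", "--for=condition=ready", "pod", "-l", "app=netcat", "--timeout=15m")
	fmt.Println(out, err)
}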

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (95.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m35.709853027s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
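
Note on the DNS/Localhost/HairPin trio above: each runs a single command inside the netcat deployment via kubectl exec: an nslookup of kubernetes.default, a netcat probe of localhost:8080, and a netcat probe of the pod's own service name (the hairpin case). A rough sketch of driving the same three checks from Go, assuming the auto-470455 context from this run; the real net_test.go wraps these in its own retry helpers.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	context := "auto-470455" // profile/context name from this run

	// The three connectivity probes run inside the netcat deployment.
	checks := map[string][]string{
		"dns":       {"nslookup", "kubernetes.default"},
		"localhost": {"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}

	for name, cmd := range checks {
		args := append([]string{"--context", context, "exec", "deployment/netcat", "--"}, cmd...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s: err=%v\n%s\n", name, err, out)
	}
}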

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (72.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1217 12:23:50.515818 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.522291 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.533850 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.555444 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.596952 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.678505 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:50.840471 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:51.162240 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:51.804226 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:53.086141 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:23:55.648085 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:00.770504 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m12.755428878s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459: exit status 7 (99.950977ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-144459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-144459 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 12:24:10.701129 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:10.708494 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:10.723329 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-144459 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.3: (55.540119485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
E1217 12:24:10.747902 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kindnet-6mtt5" [b957ec43-329f-4a6f-b1c1-703820e0fd52] Running
E1217 12:24:10.789590 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:10.871191 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:11.012701 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:11.033509 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:11.355366 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:11.997690 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:13.279859 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:15.263915 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-604622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:15.842292 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005114664s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-470455 "pgrep -a kubelet"
I1217 12:24:16.943266 1349907 config.go:182] Loaded profile config "kindnet-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-lwslr" [c93bfb27-0b31-4ce3-ab4a-7402ba38a247] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 12:24:20.963658 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-lwslr" [c93bfb27-0b31-4ce3-ab4a-7402ba38a247] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005999212s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (82.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1217 12:24:51.687874 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 12:24:52.980523 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/functional-843867/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m22.320301068s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-s9jrf" [352158a0-751d-40ab-9d72-224d8409c099] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005300825s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-470455 "pgrep -a kubelet"
I1217 12:24:56.181688 1349907 config.go:182] Loaded profile config "custom-flannel-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-ln6js" [6b34f46f-d61b-469f-a8de-69b1422de2d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-ln6js" [6b34f46f-d61b-469f-a8de-69b1422de2d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004460052s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-470455 "pgrep -a kubelet"
I1217 12:25:01.803396 1349907 config.go:182] Loaded profile config "calico-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-49z44" [af1db7c1-56ef-44d2-ab5d-050fe85f85e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-49z44" [af1db7c1-56ef-44d2-ab5d-050fe85f85e2] Running
E1217 12:25:12.456790 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004910054s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.81s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-p2jkv" [3904c157-61e6-4cea-9993-f3aff9c15745] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.146258598s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-p2jkv" [3904c157-61e6-4cea-9993-f3aff9c15745] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004298306s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-144459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-144459 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-144459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459: exit status 2 (250.578984ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459: exit status 2 (259.985567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-144459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-144459 -n default-k8s-diff-port-144459
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (64.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m4.899808347s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (107.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-470455 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m47.725194042s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.73s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
E1217 12:25:32.649544 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/no-preload-837348/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.23s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.23s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.20s)
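
Note on the PersistentMounts subtests above: each runs the same check over SSH, "df -t ext4 <path> | grep <path>", which only succeeds when the path is backed by the persistent ext4 data partition rather than tmpfs. A compact sketch that loops that command over the paths exercised here, assuming the guest-887598 profile and binary path from this run; not the actual iso_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "guest-887598" // ISO test profile from this run

	// Paths the PersistentMounts subtests verify are on the ext4 data partition.
	mounts := []string{
		"/data",
		"/var/lib/docker",
		"/var/lib/cni",
		"/var/lib/kubelet",
		"/var/lib/minikube",
		"/var/lib/toolbox",
		"/var/lib/boot2docker",
	}

	for _, m := range mounts {
		check := fmt.Sprintf("df -t ext4 %s | grep %s", m, m)
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", check).CombinedOutput()
		fmt.Printf("%-25s err=%v out=%s", m, err, out)
	}
}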

                                                
                                    
x
+
TestISOImage/VersionJSON (0.21s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 1d20c337b4b256c51c2d46553500e8ea625f1d01
iso_test.go:118:   iso_version: v1.37.0-1765846775-22141
--- PASS: TestISOImage/VersionJSON (0.21s)
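
Note on VersionJSON above: the test reads /version.json from the guest and reports four fields (kicbase_version, minikube_version, commit, iso_version). A small sketch of fetching and decoding that file the same way, assuming the same profile and that the file is a flat string-to-string JSON object, which the parsed output above suggests.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	profile := "guest-887598"

	// Read /version.json out of the guest over SSH, as the test does.
	raw, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}

	// The fields reported above suggest a flat string map; decode it as one.
	var v map[string]string
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}

	for _, k := range []string{"kicbase_version", "minikube_version", "commit", "iso_version"} {
		fmt.Printf("%s: %s\n", k, v[k])
	}
}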

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-887598 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
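
Note on eBPFSupport above: the test checks for /sys/kernel/btf/vmlinux inside the guest, the usual indicator that the kernel exposes BTF type information for eBPF tooling. The same probe, run directly against the local kernel rather than over minikube ssh, as a hedged illustration.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Presence of this file means the running kernel exposes BTF,
	// which eBPF tooling (e.g. CO-RE programs) relies on.
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND")
	}
}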

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-470455 "pgrep -a kubelet"
I1217 12:26:06.996016 1349907 config.go:182] Loaded profile config "enable-default-cni-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-vgjzs" [1a4f81e9-7ae2-4ce8-b106-403bb61d4888] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-vgjzs" [1a4f81e9-7ae2-4ce8-b106-403bb61d4888] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003556689s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-c8cpr" [f89fd504-d6d4-48f8-a1de-b082a015e35e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006701603s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-470455 "pgrep -a kubelet"
I1217 12:26:33.699137 1349907 config.go:182] Loaded profile config "flannel-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7mh7q" [f6c8457f-d19d-4dd2-a744-cdf970f98825] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 12:26:34.378264 1349907 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/old-k8s-version-757245/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-7mh7q" [f6c8457f-d19d-4dd2-a744-cdf970f98825] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003878977s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-470455 "pgrep -a kubelet"
I1217 12:27:11.162817 1349907 config.go:182] Loaded profile config "bridge-470455": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-470455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-67v2t" [f9819334-7833-476b-83e6-3a51e9d692af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-67v2t" [f9819334-7833-476b-83e6-3a51e9d692af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004288018s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-470455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
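For reference, the connectivity probes that the NetCatPod/DNS/Localhost/HairPin subtests run can be replayed by hand against any live profile. This is a minimal sketch using the exact commands logged above; bridge-470455 is simply the context name from this run and would be replaced with your own profile:

# DNS: resolve the in-cluster API service from inside the netcat deployment
kubectl --context bridge-470455 exec deployment/netcat -- nslookup kubernetes.default

# Localhost: check the pod can reach its own port 8080 over loopback
kubectl --context bridge-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"

# HairPin: check the pod can reach itself back through its own service name
kubectl --context bridge-470455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

A non-zero exit from any of these indicates the corresponding path (cluster DNS, loopback, or hairpin traffic) is broken, which is roughly what the subtests assert on.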

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
152 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
154 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
155 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
157 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
158 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
373 TestStartStop/group/disable-driver-mounts 0.24
377 TestNetworkPlugins/group/kubenet 3.92
385 TestNetworkPlugins/group/cilium 4.01
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-410268 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-875210" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-875210
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-470455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: cert-expiration-026544
contexts:
- context:
    cluster: cert-expiration-026544
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-026544
  name: cert-expiration-026544
current-context: ""
kind: Config
users:
- name: cert-expiration-026544
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.key
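The repeated "context was not found" / "Profile not found" messages in the debug dump above are expected here: the kubenet test is skipped before a cluster is ever created, so the only entry left in the kubeconfig is the unrelated cert-expiration-026544 profile shown above and current-context is empty. A quick way to confirm the same thing locally (ordinary kubectl commands, not part of the test suite):

# List every context known to the kubeconfig the tests are using
kubectl config get-contexts

# Show the merged kubeconfig itself, equivalent to the dump above
kubectl config view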

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-470455

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-470455"

                                                
                                                
----------------------- debugLogs end: kubenet-470455 [took: 3.749381617s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-470455" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-470455
--- SKIP: TestNetworkPlugins/group/kubenet (3.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-470455 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-470455" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-1345916/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.132:8443
  name: cert-expiration-026544
contexts:
- context:
    cluster: cert-expiration-026544
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 12:13:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-026544
  name: cert-expiration-026544
current-context: ""
kind: Config
users:
- name: cert-expiration-026544
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.crt
    client-key: /home/jenkins/minikube-integration/21808-1345916/.minikube/profiles/cert-expiration-026544/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-470455

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-470455" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-470455"

                                                
                                                
----------------------- debugLogs end: cilium-470455 [took: 3.848662141s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-470455" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-470455
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)

                                                
                                    