Test Report: KVM_Linux_crio 21997

                    
ee66eb73e5650a3c34c21fac75605dac5b258565:2025-12-02:42611

Failed tests (3/431)

Order  Failed test                                      Duration (s)
46     TestAddons/parallel/Ingress                      163.34
345    TestPreload                                      113.69
404    TestPause/serial/SecondStartNoReconfiguration    43
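To re-run a single failure locally, the corresponding integration test can usually be invoked on its own with the standard Go test runner. A minimal sketch only: the package path, timeout, and any job-specific arguments (driver, container runtime, minikube binary path) are assumptions and would need to match this CI configuration:

	go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 60m

The full per-test logs below show the exact commands the failing run executed.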
TestAddons/parallel/Ingress (163.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-375150 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-375150 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-375150 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f120ae74-ee28-4d2d-8418-16f78d1e0320] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f120ae74-ee28-4d2d-8418-16f78d1e0320] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.00482925s
I1202 19:48:09.053372  147070 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-375150 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.607387971s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-375150 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.62
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-375150 -n addons-375150
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 logs -n 25: (1.161529637s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-951847                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-951847 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ start   │ --download-only -p binary-mirror-718788 --alsologtostderr --binary-mirror http://127.0.0.1:37387 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-718788 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ delete  │ -p binary-mirror-718788                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-718788 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ addons  │ enable dashboard -p addons-375150                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ addons  │ disable dashboard -p addons-375150                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ start   │ -p addons-375150 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ enable headlamp -p addons-375150 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ ip      │ addons-375150 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ ssh     │ addons-375150 ssh cat /opt/local-path-provisioner/pvc-4b45ee50-01bc-49df-9618-d88b3acdefc4_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:47 UTC │
	│ addons  │ addons-375150 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:47 UTC │ 02 Dec 25 19:48 UTC │
	│ addons  │ addons-375150 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │ 02 Dec 25 19:48 UTC │
	│ ssh     │ addons-375150 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-375150                                                                                                                                                                                                                                                                                                                                                                                         │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │ 02 Dec 25 19:48 UTC │
	│ addons  │ addons-375150 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │ 02 Dec 25 19:48 UTC │
	│ addons  │ addons-375150 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │ 02 Dec 25 19:48 UTC │
	│ addons  │ addons-375150 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:48 UTC │ 02 Dec 25 19:48 UTC │
	│ ip      │ addons-375150 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-375150        │ jenkins │ v1.37.0 │ 02 Dec 25 19:50 UTC │ 02 Dec 25 19:50 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:44:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:44:59.787540  147999 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:44:59.787653  147999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:59.787677  147999 out.go:374] Setting ErrFile to fd 2...
	I1202 19:44:59.787684  147999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:59.787912  147999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:44:59.788407  147999 out.go:368] Setting JSON to false
	I1202 19:44:59.789259  147999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5244,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:44:59.789317  147999 start.go:143] virtualization: kvm guest
	I1202 19:44:59.791168  147999 out.go:179] * [addons-375150] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:44:59.792400  147999 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:44:59.792427  147999 notify.go:221] Checking for updates...
	I1202 19:44:59.795107  147999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:44:59.796216  147999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:44:59.797340  147999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:44:59.798346  147999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:44:59.799447  147999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:44:59.800624  147999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:44:59.832198  147999 out.go:179] * Using the kvm2 driver based on user configuration
	I1202 19:44:59.833400  147999 start.go:309] selected driver: kvm2
	I1202 19:44:59.833414  147999 start.go:927] validating driver "kvm2" against <nil>
	I1202 19:44:59.833425  147999 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:44:59.834183  147999 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:44:59.834460  147999 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:44:59.834496  147999 cni.go:84] Creating CNI manager for ""
	I1202 19:44:59.834555  147999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 19:44:59.834567  147999 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 19:44:59.834622  147999 start.go:353] cluster config:
	{Name:addons-375150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-375150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1202 19:44:59.834789  147999 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:44:59.836281  147999 out.go:179] * Starting "addons-375150" primary control-plane node in "addons-375150" cluster
	I1202 19:44:59.837524  147999 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:44:59.837563  147999 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 19:44:59.837578  147999 cache.go:65] Caching tarball of preloaded images
	I1202 19:44:59.837670  147999 preload.go:238] Found /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 19:44:59.837686  147999 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 19:44:59.838044  147999 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/config.json ...
	I1202 19:44:59.838070  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/config.json: {Name:mk0e0d671739365a66e98ee1a83759df016f6fbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:44:59.838226  147999 start.go:360] acquireMachinesLock for addons-375150: {Name:mk87259b3368832a6a6ed41448f2ab0149793b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 19:44:59.838288  147999 start.go:364] duration metric: took 45.775µs to acquireMachinesLock for "addons-375150"
	I1202 19:44:59.838309  147999 start.go:93] Provisioning new machine with config: &{Name:addons-375150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-375150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:44:59.838367  147999 start.go:125] createHost starting for "" (driver="kvm2")
	I1202 19:44:59.839883  147999 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1202 19:44:59.840083  147999 start.go:159] libmachine.API.Create for "addons-375150" (driver="kvm2")
	I1202 19:44:59.840113  147999 client.go:173] LocalClient.Create starting
	I1202 19:44:59.840215  147999 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem
	I1202 19:44:59.989588  147999 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem
	I1202 19:45:00.073016  147999 main.go:143] libmachine: creating domain...
	I1202 19:45:00.073049  147999 main.go:143] libmachine: creating network...
	I1202 19:45:00.075186  147999 main.go:143] libmachine: found existing default network
	I1202 19:45:00.075483  147999 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1202 19:45:00.076090  147999 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0029146e0}
	I1202 19:45:00.076185  147999 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-375150</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1202 19:45:00.082245  147999 main.go:143] libmachine: creating private network mk-addons-375150 192.168.39.0/24...
	I1202 19:45:00.153242  147999 main.go:143] libmachine: private network mk-addons-375150 192.168.39.0/24 created
	I1202 19:45:00.153598  147999 main.go:143] libmachine: <network>
	  <name>mk-addons-375150</name>
	  <uuid>cea36ebf-259b-4d4e-bde6-4a4e3970ba31</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:92:cc:53'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1202 19:45:00.153634  147999 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150 ...
	I1202 19:45:00.153675  147999 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21997-143119/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1202 19:45:00.153694  147999 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:45:00.153775  147999 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21997-143119/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21997-143119/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1202 19:45:00.443438  147999 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa...
	I1202 19:45:00.508907  147999 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/addons-375150.rawdisk...
	I1202 19:45:00.508969  147999 main.go:143] libmachine: Writing magic tar header
	I1202 19:45:00.508993  147999 main.go:143] libmachine: Writing SSH key tar header
	I1202 19:45:00.509068  147999 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150 ...
	I1202 19:45:00.509133  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150
	I1202 19:45:00.509165  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150 (perms=drwx------)
	I1202 19:45:00.509180  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-143119/.minikube/machines
	I1202 19:45:00.509191  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-143119/.minikube/machines (perms=drwxr-xr-x)
	I1202 19:45:00.509202  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:45:00.509213  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-143119/.minikube (perms=drwxr-xr-x)
	I1202 19:45:00.509222  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21997-143119
	I1202 19:45:00.509231  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21997-143119 (perms=drwxrwxr-x)
	I1202 19:45:00.509240  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1202 19:45:00.509248  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1202 19:45:00.509255  147999 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1202 19:45:00.509265  147999 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1202 19:45:00.509275  147999 main.go:143] libmachine: checking permissions on dir: /home
	I1202 19:45:00.509281  147999 main.go:143] libmachine: skipping /home - not owner
	I1202 19:45:00.509287  147999 main.go:143] libmachine: defining domain...
	I1202 19:45:00.510693  147999 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-375150</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/addons-375150.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-375150'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1202 19:45:00.519122  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:ee:05:51 in network default
	I1202 19:45:00.519790  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:00.519809  147999 main.go:143] libmachine: starting domain...
	I1202 19:45:00.519814  147999 main.go:143] libmachine: ensuring networks are active...
	I1202 19:45:00.520677  147999 main.go:143] libmachine: Ensuring network default is active
	I1202 19:45:00.521138  147999 main.go:143] libmachine: Ensuring network mk-addons-375150 is active
	I1202 19:45:00.521947  147999 main.go:143] libmachine: getting domain XML...
	I1202 19:45:00.523135  147999 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-375150</name>
	  <uuid>d6246095-94f3-4cee-908b-7809c660f726</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/addons-375150.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:77:c2:a1'/>
	      <source network='mk-addons-375150'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ee:05:51'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1202 19:45:01.809327  147999 main.go:143] libmachine: waiting for domain to start...
	I1202 19:45:01.810468  147999 main.go:143] libmachine: domain is now running
	I1202 19:45:01.810484  147999 main.go:143] libmachine: waiting for IP...
	I1202 19:45:01.811204  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:01.811576  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:01.811588  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:01.811804  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:01.811844  147999 retry.go:31] will retry after 228.054773ms: waiting for domain to come up
	I1202 19:45:02.041163  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:02.041638  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:02.041682  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:02.042014  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:02.042071  147999 retry.go:31] will retry after 290.188561ms: waiting for domain to come up
	I1202 19:45:02.333376  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:02.333945  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:02.333967  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:02.334376  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:02.334423  147999 retry.go:31] will retry after 464.384859ms: waiting for domain to come up
	I1202 19:45:02.800042  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:02.800640  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:02.800675  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:02.800996  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:02.801045  147999 retry.go:31] will retry after 561.748203ms: waiting for domain to come up
	I1202 19:45:03.364967  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:03.365608  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:03.365625  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:03.365994  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:03.366036  147999 retry.go:31] will retry after 515.893189ms: waiting for domain to come up
	I1202 19:45:03.884032  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:03.884643  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:03.884675  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:03.885021  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:03.885064  147999 retry.go:31] will retry after 706.22008ms: waiting for domain to come up
	I1202 19:45:04.593335  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:04.593982  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:04.594005  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:04.594320  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:04.594364  147999 retry.go:31] will retry after 1.011303929s: waiting for domain to come up
	I1202 19:45:05.607356  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:05.607959  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:05.607981  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:05.608296  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:05.608334  147999 retry.go:31] will retry after 894.678256ms: waiting for domain to come up
	I1202 19:45:06.504750  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:06.505420  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:06.505447  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:06.505880  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:06.505925  147999 retry.go:31] will retry after 1.761809757s: waiting for domain to come up
	I1202 19:45:08.269915  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:08.270400  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:08.270421  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:08.270723  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:08.270769  147999 retry.go:31] will retry after 1.645886527s: waiting for domain to come up
	I1202 19:45:09.918103  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:09.918796  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:09.918829  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:09.919185  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:09.919233  147999 retry.go:31] will retry after 1.848058453s: waiting for domain to come up
	I1202 19:45:11.770034  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:11.770701  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:11.770721  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:11.771060  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:11.771101  147999 retry.go:31] will retry after 2.295647407s: waiting for domain to come up
	I1202 19:45:14.069728  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:14.070286  147999 main.go:143] libmachine: no network interface addresses found for domain addons-375150 (source=lease)
	I1202 19:45:14.070299  147999 main.go:143] libmachine: trying to list again with source=arp
	I1202 19:45:14.070585  147999 main.go:143] libmachine: unable to find current IP address of domain addons-375150 in network mk-addons-375150 (interfaces detected: [])
	I1202 19:45:14.070626  147999 retry.go:31] will retry after 4.532700581s: waiting for domain to come up
	I1202 19:45:18.607942  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.608865  147999 main.go:143] libmachine: domain addons-375150 has current primary IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.608885  147999 main.go:143] libmachine: found domain IP: 192.168.39.62
	I1202 19:45:18.608894  147999 main.go:143] libmachine: reserving static IP address...
	I1202 19:45:18.609404  147999 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-375150", mac: "52:54:00:77:c2:a1", ip: "192.168.39.62"} in network mk-addons-375150
	I1202 19:45:18.820262  147999 main.go:143] libmachine: reserved static IP address 192.168.39.62 for domain addons-375150
	I1202 19:45:18.820290  147999 main.go:143] libmachine: waiting for SSH...
	I1202 19:45:18.820299  147999 main.go:143] libmachine: Getting to WaitForSSH function...
	I1202 19:45:18.823622  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.824115  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:18.824142  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.824413  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:18.824671  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:18.824686  147999 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1202 19:45:18.939543  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:45:18.939964  147999 main.go:143] libmachine: domain creation complete
	I1202 19:45:18.941233  147999 machine.go:94] provisionDockerMachine start ...
	I1202 19:45:18.943483  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.943886  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:18.943911  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:18.944070  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:18.944260  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:18.944270  147999 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 19:45:19.050845  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 19:45:19.050882  147999 buildroot.go:166] provisioning hostname "addons-375150"
	I1202 19:45:19.054302  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.054837  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.054863  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.055110  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:19.055352  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:19.055368  147999 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-375150 && echo "addons-375150" | sudo tee /etc/hostname
	I1202 19:45:19.178895  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-375150
	
	I1202 19:45:19.181817  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.182468  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.182497  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.182710  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:19.182915  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:19.182931  147999 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-375150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-375150/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-375150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 19:45:19.300545  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 19:45:19.300579  147999 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 19:45:19.300610  147999 buildroot.go:174] setting up certificates
	I1202 19:45:19.300623  147999 provision.go:84] configureAuth start
	I1202 19:45:19.303444  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.303918  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.303953  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.306058  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.306354  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.306373  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.306500  147999 provision.go:143] copyHostCerts
	I1202 19:45:19.306556  147999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 19:45:19.306690  147999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 19:45:19.306798  147999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 19:45:19.306866  147999 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.addons-375150 san=[127.0.0.1 192.168.39.62 addons-375150 localhost minikube]
	I1202 19:45:19.394203  147999 provision.go:177] copyRemoteCerts
	I1202 19:45:19.394263  147999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 19:45:19.396786  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.397304  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.397331  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.397468  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:19.480738  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 19:45:19.510684  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 19:45:19.540459  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 19:45:19.568812  147999 provision.go:87] duration metric: took 268.175745ms to configureAuth
	I1202 19:45:19.568842  147999 buildroot.go:189] setting minikube options for container-runtime
	I1202 19:45:19.569028  147999 config.go:182] Loaded profile config "addons-375150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:45:19.571896  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.572412  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:19.572450  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:19.572684  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:19.572974  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:19.572994  147999 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 19:45:20.137935  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 19:45:20.137970  147999 machine.go:97] duration metric: took 1.196717711s to provisionDockerMachine
	I1202 19:45:20.137981  147999 client.go:176] duration metric: took 20.297858831s to LocalClient.Create
	I1202 19:45:20.138001  147999 start.go:167] duration metric: took 20.297917125s to libmachine.API.Create "addons-375150"
	I1202 19:45:20.138012  147999 start.go:293] postStartSetup for "addons-375150" (driver="kvm2")
	I1202 19:45:20.138027  147999 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 19:45:20.138090  147999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 19:45:20.141148  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.141545  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.141575  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.141762  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:20.226828  147999 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 19:45:20.231791  147999 info.go:137] Remote host: Buildroot 2025.02
	I1202 19:45:20.231825  147999 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 19:45:20.231904  147999 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 19:45:20.231936  147999 start.go:296] duration metric: took 93.914467ms for postStartSetup
	I1202 19:45:20.241000  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.241563  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.241604  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.242012  147999 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/config.json ...
	I1202 19:45:20.302685  147999 start.go:128] duration metric: took 20.464267621s to createHost
	I1202 19:45:20.305792  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.306201  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.306228  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.306430  147999 main.go:143] libmachine: Using SSH client type: native
	I1202 19:45:20.306720  147999 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1202 19:45:20.306738  147999 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 19:45:20.415594  147999 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764704720.390101772
	
	I1202 19:45:20.415625  147999 fix.go:216] guest clock: 1764704720.390101772
	I1202 19:45:20.415636  147999 fix.go:229] Guest: 2025-12-02 19:45:20.390101772 +0000 UTC Remote: 2025-12-02 19:45:20.302734262 +0000 UTC m=+20.563692254 (delta=87.36751ms)
	I1202 19:45:20.415693  147999 fix.go:200] guest clock delta is within tolerance: 87.36751ms
	I1202 19:45:20.415704  147999 start.go:83] releasing machines lock for "addons-375150", held for 20.57740381s
	I1202 19:45:20.418914  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.419385  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.419411  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.420113  147999 ssh_runner.go:195] Run: cat /version.json
	I1202 19:45:20.420225  147999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 19:45:20.423580  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.423680  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.424158  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.424196  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.424266  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:20.424293  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:20.424469  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:20.424626  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:20.503461  147999 ssh_runner.go:195] Run: systemctl --version
	I1202 19:45:20.539349  147999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 19:45:20.699948  147999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 19:45:20.707020  147999 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 19:45:20.707126  147999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 19:45:20.729025  147999 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 19:45:20.729054  147999 start.go:496] detecting cgroup driver to use...
	I1202 19:45:20.729131  147999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 19:45:20.748512  147999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 19:45:20.767457  147999 docker.go:218] disabling cri-docker service (if available) ...
	I1202 19:45:20.767543  147999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 19:45:20.786223  147999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 19:45:20.803508  147999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 19:45:20.943585  147999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 19:45:21.138892  147999 docker.go:234] disabling docker service ...
	I1202 19:45:21.138957  147999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 19:45:21.154723  147999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 19:45:21.168745  147999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 19:45:21.314588  147999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 19:45:21.449524  147999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 19:45:21.464900  147999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 19:45:21.486411  147999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 19:45:21.486478  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.498695  147999 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 19:45:21.498780  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.511399  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.523137  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.535201  147999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 19:45:21.547915  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.560374  147999 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.580833  147999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 19:45:21.593002  147999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 19:45:21.602963  147999 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 19:45:21.603031  147999 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 19:45:21.625187  147999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 19:45:21.637210  147999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:45:21.773894  147999 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 19:45:21.889817  147999 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 19:45:21.889945  147999 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 19:45:21.895367  147999 start.go:564] Will wait 60s for crictl version
	I1202 19:45:21.895441  147999 ssh_runner.go:195] Run: which crictl
	I1202 19:45:21.899581  147999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 19:45:21.935678  147999 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 19:45:21.935797  147999 ssh_runner.go:195] Run: crio --version
	I1202 19:45:21.964652  147999 ssh_runner.go:195] Run: crio --version
	I1202 19:45:21.995944  147999 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1202 19:45:21.999871  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:22.000394  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:22.000420  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:22.000588  147999 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 19:45:22.005208  147999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:45:22.026685  147999 kubeadm.go:884] updating cluster {Name:addons-375150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-375150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 19:45:22.026805  147999 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:45:22.026874  147999 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:45:22.062010  147999 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1202 19:45:22.062111  147999 ssh_runner.go:195] Run: which lz4
	I1202 19:45:22.066758  147999 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 19:45:22.071211  147999 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 19:45:22.071241  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1202 19:45:23.288018  147999 crio.go:462] duration metric: took 1.221321716s to copy over tarball
	I1202 19:45:23.288097  147999 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 19:45:24.743697  147999 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.455515266s)
	I1202 19:45:24.743728  147999 crio.go:469] duration metric: took 1.455681641s to extract the tarball
	I1202 19:45:24.743737  147999 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 19:45:24.779850  147999 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 19:45:24.822755  147999 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 19:45:24.822783  147999 cache_images.go:86] Images are preloaded, skipping loading
	I1202 19:45:24.822791  147999 kubeadm.go:935] updating node { 192.168.39.62 8443 v1.34.2 crio true true} ...
	I1202 19:45:24.822897  147999 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-375150 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-375150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 19:45:24.822968  147999 ssh_runner.go:195] Run: crio config
	I1202 19:45:24.870576  147999 cni.go:84] Creating CNI manager for ""
	I1202 19:45:24.870610  147999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 19:45:24.870635  147999 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 19:45:24.870688  147999 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-375150 NodeName:addons-375150 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 19:45:24.870885  147999 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-375150"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.62"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 19:45:24.870978  147999 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 19:45:24.883816  147999 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 19:45:24.883904  147999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 19:45:24.896200  147999 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 19:45:24.918104  147999 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 19:45:24.940408  147999 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1202 19:45:24.961763  147999 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I1202 19:45:24.966175  147999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 19:45:24.981263  147999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:45:25.122078  147999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:45:25.143252  147999 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150 for IP: 192.168.39.62
	I1202 19:45:25.143279  147999 certs.go:195] generating shared ca certs ...
	I1202 19:45:25.143297  147999 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.143435  147999 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 19:45:25.181236  147999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt ...
	I1202 19:45:25.181269  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt: {Name:mkbfb89d8fd54a2502214ea17f8cc56d8d4bac29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.181455  147999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key ...
	I1202 19:45:25.181467  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key: {Name:mk3639d808d6aadb0097499dc254610c97c8ee87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.181539  147999 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 19:45:25.236084  147999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt ...
	I1202 19:45:25.236115  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt: {Name:mk77d1cfc429a5dd6aea2d699be210d969193b95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.236277  147999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key ...
	I1202 19:45:25.236288  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key: {Name:mk4dc4707ac483bb9d8043cb8eede4f99ccb6756 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.236355  147999 certs.go:257] generating profile certs ...
	I1202 19:45:25.236412  147999 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.key
	I1202 19:45:25.236427  147999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt with IP's: []
	I1202 19:45:25.309042  147999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt ...
	I1202 19:45:25.309076  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: {Name:mkf378dbc67efe521cc319b0aec2a772513e58f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.309255  147999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.key ...
	I1202 19:45:25.309269  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.key: {Name:mk61b807da372396c1c62aea07cfc3fd8b4b5e72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.309339  147999 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key.2016d8a2
	I1202 19:45:25.309358  147999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt.2016d8a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I1202 19:45:25.325992  147999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt.2016d8a2 ...
	I1202 19:45:25.326020  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt.2016d8a2: {Name:mka97f22474f0f824ead6ca7a18f773588ea1784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.326180  147999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key.2016d8a2 ...
	I1202 19:45:25.326193  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key.2016d8a2: {Name:mk65e5bc6fc03b8e9b1614dc0a31f09442c09387 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.326262  147999 certs.go:382] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt.2016d8a2 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt
	I1202 19:45:25.326332  147999 certs.go:386] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key.2016d8a2 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key
	I1202 19:45:25.326380  147999 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.key
	I1202 19:45:25.326398  147999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.crt with IP's: []
	I1202 19:45:25.559310  147999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.crt ...
	I1202 19:45:25.559342  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.crt: {Name:mk6985e90c22fe1fe51a25d82e2c5c68d34f9e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.559505  147999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.key ...
	I1202 19:45:25.559517  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.key: {Name:mk8eba8772500849c7a5661595c1f187d782a4ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:25.559704  147999 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 19:45:25.559744  147999 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 19:45:25.559766  147999 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 19:45:25.559791  147999 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 19:45:25.560362  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 19:45:25.590905  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 19:45:25.621831  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 19:45:25.654455  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 19:45:25.684192  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 19:45:25.713569  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 19:45:25.742817  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 19:45:25.772588  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 19:45:25.804192  147999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 19:45:25.851308  147999 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 19:45:25.874788  147999 ssh_runner.go:195] Run: openssl version
	I1202 19:45:25.883589  147999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 19:45:25.897540  147999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:45:25.903143  147999 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:45:25.903213  147999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 19:45:25.910749  147999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 19:45:25.924602  147999 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 19:45:25.929429  147999 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 19:45:25.929501  147999 kubeadm.go:401] StartCluster: {Name:addons-375150 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-375150 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:45:25.929601  147999 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 19:45:25.929687  147999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 19:45:25.964352  147999 cri.go:89] found id: ""
	I1202 19:45:25.964440  147999 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 19:45:25.978013  147999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 19:45:25.990467  147999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 19:45:26.002583  147999 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 19:45:26.002601  147999 kubeadm.go:158] found existing configuration files:
	
	I1202 19:45:26.002666  147999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 19:45:26.014152  147999 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 19:45:26.014220  147999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 19:45:26.026867  147999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 19:45:26.038082  147999 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 19:45:26.038153  147999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 19:45:26.050943  147999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 19:45:26.061485  147999 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 19:45:26.061548  147999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 19:45:26.073148  147999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 19:45:26.084464  147999 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 19:45:26.084526  147999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 19:45:26.096827  147999 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 19:45:26.249722  147999 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 19:45:37.524421  147999 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1202 19:45:37.524473  147999 kubeadm.go:319] [preflight] Running pre-flight checks
	I1202 19:45:37.524538  147999 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1202 19:45:37.524617  147999 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1202 19:45:37.524716  147999 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1202 19:45:37.524768  147999 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1202 19:45:37.526314  147999 out.go:252]   - Generating certificates and keys ...
	I1202 19:45:37.526385  147999 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1202 19:45:37.526436  147999 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1202 19:45:37.526490  147999 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1202 19:45:37.526565  147999 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1202 19:45:37.526628  147999 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1202 19:45:37.526711  147999 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1202 19:45:37.526768  147999 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1202 19:45:37.526940  147999 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-375150 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1202 19:45:37.527014  147999 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1202 19:45:37.527178  147999 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-375150 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1202 19:45:37.527274  147999 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1202 19:45:37.527368  147999 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1202 19:45:37.527414  147999 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1202 19:45:37.527471  147999 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1202 19:45:37.527515  147999 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1202 19:45:37.527564  147999 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1202 19:45:37.527645  147999 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1202 19:45:37.527733  147999 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1202 19:45:37.527793  147999 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1202 19:45:37.527897  147999 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1202 19:45:37.527980  147999 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1202 19:45:37.529093  147999 out.go:252]   - Booting up control plane ...
	I1202 19:45:37.529189  147999 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1202 19:45:37.529287  147999 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1202 19:45:37.529380  147999 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1202 19:45:37.529520  147999 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1202 19:45:37.529607  147999 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1202 19:45:37.529731  147999 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1202 19:45:37.529799  147999 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1202 19:45:37.529832  147999 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1202 19:45:37.529992  147999 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1202 19:45:37.530094  147999 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1202 19:45:37.530162  147999 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001965495s
	I1202 19:45:37.530307  147999 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1202 19:45:37.530400  147999 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.62:8443/livez
	I1202 19:45:37.530534  147999 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1202 19:45:37.530647  147999 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1202 19:45:37.530784  147999 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.509257489s
	I1202 19:45:37.530899  147999 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.429771551s
	I1202 19:45:37.531004  147999 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.50242072s
	I1202 19:45:37.531215  147999 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 19:45:37.531412  147999 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 19:45:37.531498  147999 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 19:45:37.531769  147999 kubeadm.go:319] [mark-control-plane] Marking the node addons-375150 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 19:45:37.531844  147999 kubeadm.go:319] [bootstrap-token] Using token: a6eutc.ju7lc010w4v7cljz
	I1202 19:45:37.533067  147999 out.go:252]   - Configuring RBAC rules ...
	I1202 19:45:37.533186  147999 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 19:45:37.533281  147999 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 19:45:37.533403  147999 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 19:45:37.533539  147999 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 19:45:37.533679  147999 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 19:45:37.533765  147999 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 19:45:37.533921  147999 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 19:45:37.533964  147999 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 19:45:37.534004  147999 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 19:45:37.534009  147999 kubeadm.go:319] 
	I1202 19:45:37.534071  147999 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 19:45:37.534084  147999 kubeadm.go:319] 
	I1202 19:45:37.534149  147999 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 19:45:37.534155  147999 kubeadm.go:319] 
	I1202 19:45:37.534185  147999 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 19:45:37.534267  147999 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 19:45:37.534321  147999 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 19:45:37.534329  147999 kubeadm.go:319] 
	I1202 19:45:37.534379  147999 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 19:45:37.534388  147999 kubeadm.go:319] 
	I1202 19:45:37.534425  147999 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 19:45:37.534431  147999 kubeadm.go:319] 
	I1202 19:45:37.534475  147999 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 19:45:37.534541  147999 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 19:45:37.534607  147999 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 19:45:37.534613  147999 kubeadm.go:319] 
	I1202 19:45:37.534699  147999 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 19:45:37.534808  147999 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 19:45:37.534822  147999 kubeadm.go:319] 
	I1202 19:45:37.534944  147999 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token a6eutc.ju7lc010w4v7cljz \
	I1202 19:45:37.535105  147999 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 \
	I1202 19:45:37.535127  147999 kubeadm.go:319] 	--control-plane 
	I1202 19:45:37.535131  147999 kubeadm.go:319] 
	I1202 19:45:37.535206  147999 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 19:45:37.535211  147999 kubeadm.go:319] 
	I1202 19:45:37.535279  147999 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token a6eutc.ju7lc010w4v7cljz \
	I1202 19:45:37.535395  147999 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 
	I1202 19:45:37.535419  147999 cni.go:84] Creating CNI manager for ""
	I1202 19:45:37.535425  147999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 19:45:37.536575  147999 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 19:45:37.537511  147999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 19:45:37.549785  147999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 19:45:37.573589  147999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 19:45:37.573719  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:37.573748  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-375150 minikube.k8s.io/updated_at=2025_12_02T19_45_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=addons-375150 minikube.k8s.io/primary=true
	I1202 19:45:37.627121  147999 ops.go:34] apiserver oom_adj: -16
	I1202 19:45:37.755280  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:38.256204  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:38.755710  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:39.255519  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:39.756079  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:40.255972  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:40.756053  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:41.255347  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:41.755412  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:42.255572  147999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 19:45:42.342006  147999 kubeadm.go:1114] duration metric: took 4.76828112s to wait for elevateKubeSystemPrivileges
	I1202 19:45:42.342061  147999 kubeadm.go:403] duration metric: took 16.412568683s to StartCluster
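Note: the elevateKubeSystemPrivileges step timed above boils down to the two kubectl operations already visible in the log; a sketch of reproducing it by hand, assuming the same kubeconfig and binary paths:

# Grant cluster-admin to the kube-system default service account (same command as in the log)
sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
  --serviceaccount=kube-system:default
# Poll until the "default" service account exists, mirroring the repeated "get sa default" calls above
until sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  get sa default >/dev/null 2>&1; do sleep 0.5; done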
	I1202 19:45:42.342090  147999 settings.go:142] acquiring lock: {Name:mka4c337368f188b532e41dc38505f24fc351556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:42.342213  147999 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:45:42.342631  147999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/kubeconfig: {Name:mk45f2610791f17b0d78039ad0468591c7331759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 19:45:42.342931  147999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 19:45:42.342985  147999 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 19:45:42.343054  147999 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
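Note: the toEnable map above corresponds to what a user would request with minikube's --addons flag at start time; a sketch only (addon list abbreviated, and this test harness may enable addons by other means than the flag shown):

minikube start -p addons-375150 --driver=kvm2 --container-runtime=crio \
  --addons=ingress --addons=ingress-dns --addons=registry --addons=metrics-server \
  --addons=csi-hostpath-driver --addons=volumesnapshots --addons=yakd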
	I1202 19:45:42.343215  147999 addons.go:70] Setting yakd=true in profile "addons-375150"
	I1202 19:45:42.343239  147999 addons.go:239] Setting addon yakd=true in "addons-375150"
	I1202 19:45:42.343259  147999 addons.go:70] Setting inspektor-gadget=true in profile "addons-375150"
	I1202 19:45:42.343294  147999 addons.go:70] Setting registry-creds=true in profile "addons-375150"
	I1202 19:45:42.343217  147999 config.go:182] Loaded profile config "addons-375150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:45:42.343304  147999 addons.go:70] Setting cloud-spanner=true in profile "addons-375150"
	I1202 19:45:42.343319  147999 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-375150"
	I1202 19:45:42.343324  147999 addons.go:70] Setting volcano=true in profile "addons-375150"
	I1202 19:45:42.343327  147999 addons.go:70] Setting ingress=true in profile "addons-375150"
	I1202 19:45:42.343341  147999 addons.go:239] Setting addon ingress=true in "addons-375150"
	I1202 19:45:42.343348  147999 addons.go:239] Setting addon cloud-spanner=true in "addons-375150"
	I1202 19:45:42.343352  147999 addons.go:239] Setting addon volcano=true in "addons-375150"
	I1202 19:45:42.343349  147999 addons.go:70] Setting ingress-dns=true in profile "addons-375150"
	I1202 19:45:42.343359  147999 addons.go:70] Setting volumesnapshots=true in profile "addons-375150"
	I1202 19:45:42.343321  147999 addons.go:70] Setting gcp-auth=true in profile "addons-375150"
	I1202 19:45:42.343369  147999 addons.go:239] Setting addon ingress-dns=true in "addons-375150"
	I1202 19:45:42.343372  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343373  147999 addons.go:239] Setting addon volumesnapshots=true in "addons-375150"
	I1202 19:45:42.343376  147999 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-375150"
	I1202 19:45:42.343382  147999 mustload.go:66] Loading cluster: addons-375150
	I1202 19:45:42.343385  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343396  147999 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-375150"
	I1202 19:45:42.343397  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343306  147999 addons.go:239] Setting addon registry-creds=true in "addons-375150"
	I1202 19:45:42.343416  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343422  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343379  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343397  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343588  147999 config.go:182] Loaded profile config "addons-375150": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:45:42.343274  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343315  147999 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-375150"
	I1202 19:45:42.344854  147999 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-375150"
	I1202 19:45:42.343311  147999 addons.go:239] Setting addon inspektor-gadget=true in "addons-375150"
	I1202 19:45:42.344923  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343363  147999 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-375150"
	I1202 19:45:42.345182  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343299  147999 addons.go:70] Setting default-storageclass=true in profile "addons-375150"
	I1202 19:45:42.345240  147999 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-375150"
	I1202 19:45:42.343288  147999 addons.go:70] Setting registry=true in profile "addons-375150"
	I1202 19:45:42.345564  147999 addons.go:239] Setting addon registry=true in "addons-375150"
	I1202 19:45:42.345596  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.345645  147999 out.go:179] * Verifying Kubernetes components...
	I1202 19:45:42.343280  147999 addons.go:70] Setting metrics-server=true in profile "addons-375150"
	I1202 19:45:42.345717  147999 addons.go:239] Setting addon metrics-server=true in "addons-375150"
	I1202 19:45:42.345752  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.343310  147999 addons.go:70] Setting storage-provisioner=true in profile "addons-375150"
	I1202 19:45:42.345821  147999 addons.go:239] Setting addon storage-provisioner=true in "addons-375150"
	I1202 19:45:42.343284  147999 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-375150"
	I1202 19:45:42.345975  147999 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-375150"
	I1202 19:45:42.346006  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.345888  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.347293  147999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 19:45:42.350214  147999 host.go:66] Checking if "addons-375150" exists ...
	W1202 19:45:42.350453  147999 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1202 19:45:42.351780  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1202 19:45:42.351790  147999 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1202 19:45:42.352710  147999 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1202 19:45:42.352738  147999 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1202 19:45:42.352715  147999 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1202 19:45:42.352710  147999 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1202 19:45:42.353268  147999 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-375150"
	I1202 19:45:42.354351  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.353543  147999 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1202 19:45:42.353553  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1202 19:45:42.354511  147999 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1202 19:45:42.354750  147999 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:45:42.354746  147999 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:45:42.354766  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1202 19:45:42.354771  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1202 19:45:42.353736  147999 addons.go:239] Setting addon default-storageclass=true in "addons-375150"
	I1202 19:45:42.354853  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:42.354888  147999 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1202 19:45:42.354898  147999 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1202 19:45:42.355509  147999 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1202 19:45:42.355512  147999 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1202 19:45:42.355942  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1202 19:45:42.356325  147999 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 19:45:42.356339  147999 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1202 19:45:42.356395  147999 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:45:42.356416  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1202 19:45:42.356418  147999 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:45:42.356473  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1202 19:45:42.356399  147999 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1202 19:45:42.357189  147999 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:45:42.356326  147999 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1202 19:45:42.357206  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1202 19:45:42.358108  147999 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:45:42.358125  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 19:45:42.358110  147999 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:45:42.358244  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1202 19:45:42.358839  147999 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1202 19:45:42.358857  147999 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1202 19:45:42.358856  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1202 19:45:42.359490  147999 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1202 19:45:42.359870  147999 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 19:45:42.359888  147999 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 19:45:42.359489  147999 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:45:42.360260  147999 out.go:179]   - Using image docker.io/registry:3.0.0
	I1202 19:45:42.361978  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1202 19:45:42.362066  147999 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1202 19:45:42.362088  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1202 19:45:42.362245  147999 out.go:179]   - Using image docker.io/busybox:stable
	I1202 19:45:42.362415  147999 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:45:42.362431  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1202 19:45:42.363156  147999 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:45:42.363172  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1202 19:45:42.363229  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.364274  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1202 19:45:42.364960  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.364991  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.365043  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.366117  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.366435  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.366754  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1202 19:45:42.368242  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.368291  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.368334  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.369033  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.369113  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.369124  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.369372  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.369691  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.369927  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1202 19:45:42.369987  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.370026  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.370134  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.370646  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.370713  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.370721  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.371367  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.371394  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.371420  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.371690  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.371866  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.372576  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.372644  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1202 19:45:42.372979  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.373165  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.373202  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.373545  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.373580  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.373681  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.374126  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.374159  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.374249  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.374280  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.374819  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.375078  147999 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1202 19:45:42.375131  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.375392  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.375447  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.375678  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.375716  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.375719  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.375872  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.375890  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.376036  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.376057  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.376362  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1202 19:45:42.376389  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1202 19:45:42.376521  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.376558  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.376570  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.376571  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.376597  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.376611  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.376915  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.376944  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.376967  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:42.379492  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.379966  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:42.379990  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:42.380150  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	W1202 19:45:42.639151  147999 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47756->192.168.39.62:22: read: connection reset by peer
	I1202 19:45:42.639185  147999 retry.go:31] will retry after 366.046056ms: ssh: handshake failed: read tcp 192.168.39.1:47756->192.168.39.62:22: read: connection reset by peer
	W1202 19:45:42.673790  147999 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47768->192.168.39.62:22: read: connection reset by peer
	I1202 19:45:42.673823  147999 retry.go:31] will retry after 213.403653ms: ssh: handshake failed: read tcp 192.168.39.1:47768->192.168.39.62:22: read: connection reset by peer
	I1202 19:45:42.797417  147999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 19:45:42.797440  147999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 19:45:43.003256  147999 node_ready.go:35] waiting up to 6m0s for node "addons-375150" to be "Ready" ...
	I1202 19:45:43.007020  147999 node_ready.go:49] node "addons-375150" is "Ready"
	I1202 19:45:43.007048  147999 node_ready.go:38] duration metric: took 3.759348ms for node "addons-375150" to be "Ready" ...
	I1202 19:45:43.007067  147999 api_server.go:52] waiting for apiserver process to appear ...
	I1202 19:45:43.007111  147999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 19:45:43.081794  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1202 19:45:43.087231  147999 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1202 19:45:43.087261  147999 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1202 19:45:43.202112  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1202 19:45:43.203736  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1202 19:45:43.359531  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 19:45:43.395750  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1202 19:45:43.472538  147999 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1202 19:45:43.472565  147999 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1202 19:45:43.509005  147999 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1202 19:45:43.509049  147999 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1202 19:45:43.536869  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1202 19:45:43.546357  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1202 19:45:43.551112  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1202 19:45:43.551143  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1202 19:45:43.569812  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1202 19:45:43.706938  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1202 19:45:43.742198  147999 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1202 19:45:43.742227  147999 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1202 19:45:43.878119  147999 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1202 19:45:43.878151  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1202 19:45:43.980790  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 19:45:44.004430  147999 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1202 19:45:44.004461  147999 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1202 19:45:44.013545  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1202 19:45:44.013576  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1202 19:45:44.049349  147999 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:45:44.049375  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1202 19:45:44.109544  147999 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1202 19:45:44.109570  147999 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1202 19:45:44.367365  147999 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1202 19:45:44.367394  147999 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1202 19:45:44.448479  147999 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1202 19:45:44.448513  147999 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1202 19:45:44.449161  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1202 19:45:44.449186  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1202 19:45:44.492742  147999 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:45:44.492765  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1202 19:45:44.503225  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1202 19:45:44.753275  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1202 19:45:44.753307  147999 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1202 19:45:44.803890  147999 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:45:44.803919  147999 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1202 19:45:44.891067  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1202 19:45:44.910151  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1202 19:45:44.910188  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1202 19:45:45.119699  147999 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:45:45.119724  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1202 19:45:45.176642  147999 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1202 19:45:45.176691  147999 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1202 19:45:45.226894  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1202 19:45:45.338847  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:45:45.630772  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1202 19:45:45.630796  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1202 19:45:45.646774  147999 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.849305733s)
	I1202 19:45:45.646806  147999 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
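Note: the sed pipeline completed above injects a hosts block and a log directive into the CoreDNS Corefile before replacing the ConfigMap. A sketch of confirming the result, assuming standard kubectl jsonpath output:

kubectl --context addons-375150 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Per the sed expressions in the log, the output should now contain:
#     hosts {
#        192.168.39.1 host.minikube.internal
#        fallthrough
#     }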
	I1202 19:45:45.646889  147999 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.639757576s)
	I1202 19:45:45.646931  147999 api_server.go:72] duration metric: took 3.303915313s to wait for apiserver process to appear ...
	I1202 19:45:45.646942  147999 api_server.go:88] waiting for apiserver healthz status ...
	I1202 19:45:45.646965  147999 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I1202 19:45:45.646931  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.565106527s)
	I1202 19:45:45.647008  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.44486483s)
	I1202 19:45:45.682102  147999 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I1202 19:45:45.711926  147999 api_server.go:141] control plane version: v1.34.2
	I1202 19:45:45.711962  147999 api_server.go:131] duration metric: took 65.011053ms to wait for apiserver health ...
	I1202 19:45:45.711981  147999 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 19:45:45.755251  147999 system_pods.go:59] 10 kube-system pods found
	I1202 19:45:45.755310  147999 system_pods.go:61] "amd-gpu-device-plugin-rxk7z" [54b6cdd2-2cba-4f34-8f6e-97404e05daa0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:45:45.755324  147999 system_pods.go:61] "coredns-66bc5c9577-6m2rj" [5baa3358-504a-44f5-a1b4-a4935763d8a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:45:45.755333  147999 system_pods.go:61] "coredns-66bc5c9577-hfcgk" [c12714b8-035b-457f-a5a3-3aeeea534546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:45:45.755345  147999 system_pods.go:61] "etcd-addons-375150" [f10d508a-9cff-4354-82a7-5e647d565353] Running
	I1202 19:45:45.755353  147999 system_pods.go:61] "kube-apiserver-addons-375150" [afaad7da-7bbe-4f18-9c6c-c37c5c2f810b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:45:45.755366  147999 system_pods.go:61] "kube-controller-manager-addons-375150" [7b3f7c56-b719-402f-98f4-e766cfb7f312] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:45:45.755372  147999 system_pods.go:61] "kube-proxy-djl9q" [0561e2a2-6a79-4061-9d79-8184acaaf5a9] Running
	I1202 19:45:45.755380  147999 system_pods.go:61] "kube-scheduler-addons-375150" [eeab9774-2372-40a1-951c-acc424749a93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 19:45:45.755387  147999 system_pods.go:61] "nvidia-device-plugin-daemonset-ndzsk" [eef114d3-d8df-4458-a3b5-b2bb9455b793] Pending
	I1202 19:45:45.755395  147999 system_pods.go:61] "registry-creds-764b6fb674-n9qwg" [9d3299e9-bd2b-41cc-ac08-734dfca5f39d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:45:45.755409  147999 system_pods.go:74] duration metric: took 43.4207ms to wait for pod list to return data ...
	I1202 19:45:45.755421  147999 default_sa.go:34] waiting for default service account to be created ...
	I1202 19:45:45.790059  147999 default_sa.go:45] found service account: "default"
	I1202 19:45:45.790087  147999 default_sa.go:55] duration metric: took 34.65961ms for default service account to be created ...
	I1202 19:45:45.790097  147999 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 19:45:45.858128  147999 system_pods.go:86] 10 kube-system pods found
	I1202 19:45:45.858160  147999 system_pods.go:89] "amd-gpu-device-plugin-rxk7z" [54b6cdd2-2cba-4f34-8f6e-97404e05daa0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1202 19:45:45.858167  147999 system_pods.go:89] "coredns-66bc5c9577-6m2rj" [5baa3358-504a-44f5-a1b4-a4935763d8a1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:45:45.858175  147999 system_pods.go:89] "coredns-66bc5c9577-hfcgk" [c12714b8-035b-457f-a5a3-3aeeea534546] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 19:45:45.858179  147999 system_pods.go:89] "etcd-addons-375150" [f10d508a-9cff-4354-82a7-5e647d565353] Running
	I1202 19:45:45.858185  147999 system_pods.go:89] "kube-apiserver-addons-375150" [afaad7da-7bbe-4f18-9c6c-c37c5c2f810b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 19:45:45.858192  147999 system_pods.go:89] "kube-controller-manager-addons-375150" [7b3f7c56-b719-402f-98f4-e766cfb7f312] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 19:45:45.858195  147999 system_pods.go:89] "kube-proxy-djl9q" [0561e2a2-6a79-4061-9d79-8184acaaf5a9] Running
	I1202 19:45:45.858200  147999 system_pods.go:89] "kube-scheduler-addons-375150" [eeab9774-2372-40a1-951c-acc424749a93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 19:45:45.858205  147999 system_pods.go:89] "nvidia-device-plugin-daemonset-ndzsk" [eef114d3-d8df-4458-a3b5-b2bb9455b793] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1202 19:45:45.858211  147999 system_pods.go:89] "registry-creds-764b6fb674-n9qwg" [9d3299e9-bd2b-41cc-ac08-734dfca5f39d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1202 19:45:45.858218  147999 system_pods.go:126] duration metric: took 68.11588ms to wait for k8s-apps to be running ...
	I1202 19:45:45.858225  147999 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 19:45:45.858272  147999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 19:45:46.150323  147999 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-375150" context rescaled to 1 replicas
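Note: the rescale logged above is done programmatically by minikube; a CLI equivalent, as a sketch, would be:

kubectl --context addons-375150 -n kube-system scale deployment coredns --replicas=1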
	I1202 19:45:46.258070  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1202 19:45:46.258099  147999 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1202 19:45:46.740027  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1202 19:45:46.740058  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1202 19:45:46.986811  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1202 19:45:46.986845  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1202 19:45:47.271062  147999 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:45:47.271092  147999 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1202 19:45:47.788760  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1202 19:45:47.806349  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.602574328s)
	I1202 19:45:47.923335  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.563765364s)
	I1202 19:45:47.923390  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.527593431s)
	I1202 19:45:47.923424  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.386519122s)
	I1202 19:45:49.784741  147999 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1202 19:45:49.788330  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:49.789134  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:49.789165  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:49.789414  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:50.060723  147999 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1202 19:45:50.155114  147999 addons.go:239] Setting addon gcp-auth=true in "addons-375150"
	I1202 19:45:50.155173  147999 host.go:66] Checking if "addons-375150" exists ...
	I1202 19:45:50.157213  147999 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1202 19:45:50.159753  147999 main.go:143] libmachine: domain addons-375150 has defined MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:50.160203  147999 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:77:c2:a1", ip: ""} in network mk-addons-375150: {Iface:virbr1 ExpiryTime:2025-12-02 20:45:15 +0000 UTC Type:0 Mac:52:54:00:77:c2:a1 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:addons-375150 Clientid:01:52:54:00:77:c2:a1}
	I1202 19:45:50.160231  147999 main.go:143] libmachine: domain addons-375150 has defined IP address 192.168.39.62 and MAC address 52:54:00:77:c2:a1 in network mk-addons-375150
	I1202 19:45:50.160376  147999 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/addons-375150/id_rsa Username:docker}
	I1202 19:45:51.658451  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.112049411s)
	I1202 19:45:51.658504  147999 addons.go:495] Verifying addon ingress=true in "addons-375150"
	I1202 19:45:51.658530  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.088686441s)
	I1202 19:45:51.658651  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.67783529s)
	I1202 19:45:51.658718  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.155460458s)
	I1202 19:45:51.658745  147999 addons.go:495] Verifying addon registry=true in "addons-375150"
	I1202 19:45:51.658757  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.767656113s)
	I1202 19:45:51.658819  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.431890703s)
	I1202 19:45:51.658843  147999 addons.go:495] Verifying addon metrics-server=true in "addons-375150"
	I1202 19:45:51.658623  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.951648081s)
	I1202 19:45:51.658932  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.320036159s)
	W1202 19:45:51.658970  147999 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1202 19:45:51.658974  147999 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.800685303s)
	I1202 19:45:51.659455  147999 system_svc.go:56] duration metric: took 5.801223846s WaitForService to wait for kubelet
	I1202 19:45:51.659468  147999 kubeadm.go:587] duration metric: took 9.316453025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 19:45:51.659488  147999 node_conditions.go:102] verifying NodePressure condition ...
	I1202 19:45:51.658994  147999 retry.go:31] will retry after 164.198675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
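The two apply failures above are the usual CRD-establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass object and is applied in the same batch as the snapshot.storage.k8s.io CRDs, so the API server rejects it before those CRDs are served ("no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first"). The log shows minikube recovering on its own: the retry with --force issued at 19:45:51.824556 completes without error at 19:45:53.554968. As a hedged sketch only (not minikube's actual retry logic), the ordering the error message asks for would look like applying the CRDs first, waiting for them to be established, and only then applying the dependent object, using the same manifest paths that appear in the log:

	# sketch: apply snapshot CRDs, wait until established, then apply the VolumeSnapshotClass
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	              crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml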
	I1202 19:45:51.660825  147999 out.go:179] * Verifying registry addon...
	I1202 19:45:51.660836  147999 out.go:179] * Verifying ingress addon...
	I1202 19:45:51.661785  147999 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-375150 service yakd-dashboard -n yakd-dashboard
	
	I1202 19:45:51.663358  147999 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1202 19:45:51.663373  147999 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1202 19:45:51.717760  147999 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1202 19:45:51.717791  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:51.718062  147999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 19:45:51.718096  147999 node_conditions.go:123] node cpu capacity is 2
	I1202 19:45:51.718115  147999 node_conditions.go:105] duration metric: took 58.621334ms to run NodePressure ...
	I1202 19:45:51.718131  147999 start.go:242] waiting for startup goroutines ...
	I1202 19:45:51.718073  147999 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1202 19:45:51.718142  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1202 19:45:51.746329  147999 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
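The 'default-storageclass' warning above is a routine optimistic-concurrency conflict: the local-path StorageClass changed between minikube's read and its update, so the write with the stale resourceVersion was rejected and the addon callback surfaces the error. Assuming (not shown in this log) that only the default-class marker needs to change, a patch expresses the same intent without carrying a resourceVersion and so cannot hit this particular conflict; the annotation below is the standard Kubernetes default-class marker:

	# sketch, not the addons code path: clear the default-class marker on local-path
	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'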
	I1202 19:45:51.824556  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1202 19:45:52.171787  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:52.173074  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:52.670525  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:52.684200  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:52.733631  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.944814149s)
	I1202 19:45:52.733677  147999 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.57642352s)
	I1202 19:45:52.733694  147999 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-375150"
	I1202 19:45:52.734835  147999 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1202 19:45:52.734836  147999 out.go:179] * Verifying csi-hostpath-driver addon...
	I1202 19:45:52.736042  147999 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1202 19:45:52.736992  147999 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1202 19:45:52.737355  147999 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1202 19:45:52.737369  147999 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1202 19:45:52.774375  147999 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1202 19:45:52.774397  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:52.856241  147999 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1202 19:45:52.856273  147999 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1202 19:45:52.928331  147999 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:45:52.928353  147999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1202 19:45:53.001065  147999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1202 19:45:53.173720  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:53.173744  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:53.273371  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:53.554968  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.730350616s)
	I1202 19:45:53.671328  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:53.672063  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:53.745013  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:54.195479  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:54.195902  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:54.271540  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:54.308575  147999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.307464768s)
	I1202 19:45:54.309650  147999 addons.go:495] Verifying addon gcp-auth=true in "addons-375150"
	I1202 19:45:54.311592  147999 out.go:179] * Verifying gcp-auth addon...
	I1202 19:45:54.313866  147999 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1202 19:45:54.347809  147999 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1202 19:45:54.347830  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:54.675092  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:54.676769  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:54.744269  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:54.823312  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:55.170037  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:55.172157  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:55.272834  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:55.372411  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:55.666921  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:55.667052  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:55.742275  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:55.817917  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:56.168142  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:56.168413  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:56.241083  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:56.317482  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:56.668209  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:56.668208  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:56.740466  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:56.817642  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:57.167380  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:57.168134  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:57.240469  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:57.320530  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:57.670111  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:57.670671  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:57.743109  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:57.818762  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:58.168980  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:58.169012  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:58.242044  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:58.317430  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:58.670940  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:58.670979  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:58.744591  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:58.819439  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:59.170810  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:59.170931  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:59.242138  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:59.317525  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:45:59.670284  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:45:59.670449  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:45:59.741298  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:45:59.817853  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:00.173059  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:00.173340  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:00.243988  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:00.318306  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:00.752473  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:00.753470  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:00.755229  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:00.818371  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:01.167312  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:01.167342  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:01.240876  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:01.319193  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:01.667292  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:01.668340  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:01.742485  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:01.817827  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:02.168541  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:02.168722  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:02.241670  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:02.317971  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:02.666856  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:02.669716  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:02.741565  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:02.818536  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:03.167330  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:03.168313  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:03.240930  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:03.318196  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:03.671065  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:03.671597  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:03.741553  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:03.818053  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:04.169721  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:04.170178  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:04.243546  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:04.321160  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:04.666418  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:04.666860  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:04.741331  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:04.819649  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:05.167777  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:05.167906  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:05.242296  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:05.318530  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:05.673853  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:05.674189  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:05.977859  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:05.979165  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:06.168462  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:06.168458  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:06.244321  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:06.319235  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:06.666421  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:06.666868  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:06.743074  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:06.842479  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:07.166685  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:07.168318  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:07.242151  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:07.343015  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:07.668095  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:07.668090  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:07.740494  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:07.819110  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:08.167175  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:08.167463  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:08.242433  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:08.321916  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:08.667325  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:08.669054  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:08.741797  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:08.818188  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:09.169783  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:09.169901  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:09.243698  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:09.319866  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:09.669904  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:09.670456  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:09.741317  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:09.821522  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:10.171698  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:10.174239  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:10.244475  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:10.319454  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:10.673828  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:10.674192  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:10.743327  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:10.819173  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:11.172268  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:11.173740  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:11.242291  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:11.317475  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:11.668840  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:11.669077  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:11.742344  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:11.819539  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:12.167107  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:12.168895  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:12.250390  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:12.317556  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:12.670191  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:12.670328  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:12.741369  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:12.817495  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:13.168844  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:13.170304  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:13.240980  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:13.318091  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:13.668441  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:13.670051  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:13.742082  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:13.817986  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:14.173489  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:14.173645  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:14.242539  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:14.319189  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:14.668470  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:14.668529  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:14.741484  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:14.819537  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:15.234834  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:15.235112  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:15.241853  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:15.475840  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:15.670937  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:15.670944  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:15.770404  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:15.819318  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:16.168311  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:16.168815  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:16.268442  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:16.317871  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:16.670271  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:16.671420  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:16.740955  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:16.818955  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:17.169084  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:17.169600  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:17.241974  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:17.317837  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:17.667875  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:17.668569  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:17.741066  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:17.817447  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:18.167555  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:18.167601  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:18.240558  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:18.317781  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:18.671701  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:18.673266  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:18.747093  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:18.817782  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:19.168688  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:19.168812  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:19.244494  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:19.318033  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:19.668615  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:19.670956  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:19.741967  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:19.820921  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:20.168306  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:20.168814  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:20.241404  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:20.317409  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:20.666839  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:20.667605  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:20.743243  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:20.842910  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:21.167175  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:21.169064  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:21.240615  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:21.317880  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:21.668688  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:21.668839  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:21.771335  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:21.816764  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:22.172848  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:22.174244  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:22.241170  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:22.319691  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:22.668636  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:22.668695  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:22.741606  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:22.820386  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:23.168096  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:23.168529  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:23.354342  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:23.356079  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:23.671307  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:23.673491  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:23.742958  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:23.818999  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:24.171400  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:24.171531  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:24.241832  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:24.319237  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:24.669766  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:24.673540  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:24.742922  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:24.818310  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:25.166965  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:25.167278  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:25.241817  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:25.318271  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:25.670547  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:25.674185  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:25.759583  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:25.819061  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:26.169302  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:26.169651  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:26.242196  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:26.318905  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:26.669758  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:26.670367  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:26.743625  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:26.819761  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:27.167813  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:27.171048  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:27.241457  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:27.319911  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:27.673341  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:27.674490  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:27.740949  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:27.821377  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:28.170165  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:28.170207  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:28.240783  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:28.318739  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:28.668057  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:28.668993  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:28.742131  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:28.820991  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:29.175642  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:29.175764  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:29.241944  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:29.322234  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:29.671231  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:29.678766  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:30.095164  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:30.095169  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:30.196824  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:30.198647  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:30.295997  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:30.320105  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:30.670383  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:30.671107  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:30.743942  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:30.820694  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:31.173169  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:31.175980  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:31.242341  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:31.318798  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:31.670617  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:31.670806  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:31.744902  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:31.822550  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:32.172962  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:32.173527  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:32.241612  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:32.318425  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:32.668955  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:32.668970  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:32.741259  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:32.818017  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:33.169071  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:33.170706  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:33.271279  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:33.317006  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:33.667390  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:33.668232  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:33.742738  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:33.817285  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:34.167761  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:34.167967  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:34.241535  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:34.317762  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:34.667630  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:34.668237  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:34.740551  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:34.818748  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:35.168385  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:35.168538  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:35.240834  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:35.318139  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:35.666831  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:35.667425  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:35.741739  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:35.842058  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:36.197356  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:36.197444  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:36.242625  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:36.318672  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:36.673290  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:36.673729  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:36.741634  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:36.821718  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:37.170769  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:37.170826  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:37.242704  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:37.318355  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:37.668390  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:37.668519  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:37.741588  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:37.818769  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:38.169024  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:38.169876  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:38.269622  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:38.318460  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:38.667811  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1202 19:46:38.667862  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:38.741414  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:38.817547  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:39.167513  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:39.167596  147999 kapi.go:107] duration metric: took 47.504234832s to wait for kubernetes.io/minikube-addons=registry ...
	I1202 19:46:39.241292  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:39.317444  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:39.666786  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:39.741560  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:39.819291  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:40.170088  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:40.271613  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:40.319289  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:40.669013  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:40.743464  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:40.820189  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:41.167424  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:41.240762  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:41.318439  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:41.668409  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:41.746055  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:41.817007  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:42.169554  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:42.241080  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:42.320995  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:42.667622  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:42.741736  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:42.817972  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:43.168177  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:43.240866  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:43.317877  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:43.669265  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:43.741372  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:43.817710  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:44.169187  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:44.242308  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:44.317695  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:44.668592  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:44.740604  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:44.818630  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:45.170336  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:45.243139  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:45.318177  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:45.669981  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:45.743171  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:45.818280  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:46.169888  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:46.269551  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:46.369264  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:46.672098  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:46.743345  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:46.817886  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:47.167125  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:47.240597  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:47.320325  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:47.668775  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:47.744526  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:47.818538  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:48.170548  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:48.269448  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:48.370076  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:48.668227  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:48.741707  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:48.817546  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:49.169061  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:49.270036  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:49.368872  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:49.668577  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:49.741427  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:49.818423  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:50.171750  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:50.241796  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:50.318038  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:50.668042  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:50.741297  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:50.817776  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:51.169580  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:51.243341  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:51.317194  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:51.670533  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:51.743390  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:51.818425  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:52.174707  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:52.241306  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:52.319501  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:52.668282  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:52.741122  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:52.821395  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:53.325740  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:53.325978  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:53.328943  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:53.669767  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:53.740745  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:53.819253  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:54.170028  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:54.241329  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:54.319801  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:54.671229  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:54.741848  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:54.820466  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:55.167732  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:55.241693  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:55.323069  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:55.669166  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:55.768694  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:55.820191  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:56.169535  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:56.241590  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:56.320415  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:56.671064  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:56.743093  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:56.820889  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:57.169998  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:57.270248  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:57.317290  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:57.669209  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:57.741148  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:57.821718  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:58.169830  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:58.242621  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1202 19:46:58.323506  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:58.671350  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:58.748140  147999 kapi.go:107] duration metric: took 1m6.011145761s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1202 19:46:58.824636  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:59.166958  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:59.319267  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:46:59.669683  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:46:59.823973  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:00.168386  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:00.319110  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:00.675550  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:00.820133  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:01.215341  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:01.319070  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:01.668686  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:01.819575  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:02.168872  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:02.318298  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:02.680598  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:02.817827  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:03.170133  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:03.323315  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:03.668630  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:03.817605  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:04.168236  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:04.317249  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:04.667749  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:04.820019  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:05.167845  147999 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1202 19:47:05.318031  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:05.668466  147999 kapi.go:107] duration metric: took 1m14.005089321s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1202 19:47:05.818310  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:06.318080  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:06.819466  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:07.318336  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:07.821135  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:08.318497  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:08.820835  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:09.321824  147999 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1202 19:47:09.821364  147999 kapi.go:107] duration metric: took 1m15.507497595s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1202 19:47:09.823252  147999 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-375150 cluster.
	I1202 19:47:09.824615  147999 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1202 19:47:09.825798  147999 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1202 19:47:09.827037  147999 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1202 19:47:09.828111  147999 addons.go:530] duration metric: took 1m27.485060007s for enable addons: enabled=[amd-gpu-device-plugin registry-creds ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1202 19:47:09.828168  147999 start.go:247] waiting for cluster config update ...
	I1202 19:47:09.828188  147999 start.go:256] writing updated cluster config ...
	I1202 19:47:09.828466  147999 ssh_runner.go:195] Run: rm -f paused
	I1202 19:47:09.840218  147999 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:47:09.844572  147999 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6m2rj" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.851612  147999 pod_ready.go:94] pod "coredns-66bc5c9577-6m2rj" is "Ready"
	I1202 19:47:09.851644  147999 pod_ready.go:86] duration metric: took 7.047052ms for pod "coredns-66bc5c9577-6m2rj" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.854269  147999 pod_ready.go:83] waiting for pod "etcd-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.859758  147999 pod_ready.go:94] pod "etcd-addons-375150" is "Ready"
	I1202 19:47:09.859779  147999 pod_ready.go:86] duration metric: took 5.484108ms for pod "etcd-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.862956  147999 pod_ready.go:83] waiting for pod "kube-apiserver-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.868840  147999 pod_ready.go:94] pod "kube-apiserver-addons-375150" is "Ready"
	I1202 19:47:09.868870  147999 pod_ready.go:86] duration metric: took 5.894729ms for pod "kube-apiserver-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:09.870703  147999 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:10.245461  147999 pod_ready.go:94] pod "kube-controller-manager-addons-375150" is "Ready"
	I1202 19:47:10.245506  147999 pod_ready.go:86] duration metric: took 374.774986ms for pod "kube-controller-manager-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:10.449065  147999 pod_ready.go:83] waiting for pod "kube-proxy-djl9q" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:10.845632  147999 pod_ready.go:94] pod "kube-proxy-djl9q" is "Ready"
	I1202 19:47:10.845676  147999 pod_ready.go:86] duration metric: took 396.576783ms for pod "kube-proxy-djl9q" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:11.045722  147999 pod_ready.go:83] waiting for pod "kube-scheduler-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:11.445263  147999 pod_ready.go:94] pod "kube-scheduler-addons-375150" is "Ready"
	I1202 19:47:11.445294  147999 pod_ready.go:86] duration metric: took 399.539433ms for pod "kube-scheduler-addons-375150" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 19:47:11.445307  147999 pod_ready.go:40] duration metric: took 1.605034878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 19:47:11.493360  147999 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 19:47:11.495259  147999 out.go:179] * Done! kubectl is now configured to use "addons-375150" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.952159534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764705025952132671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a2ed841-6f33-4949-8c62-b32bd51682ab name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.953130482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96117270-9251-4f72-887a-fe3e1898b21d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.953241676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96117270-9251-4f72-887a-fe3e1898b21d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.953894215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5075a8c5656f183c4adbb9b5bf98e87250c133be82edea5a12cf4b093425d5e,PodSandboxId:83d91d2daab9d8167d34f1e431f32d399d955f00272d0c8f03ef4477a7f25d44,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764704883639039455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f120ae74-ee28-4d2d-8418-16f78d1e0320,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0ad35c2e35348a13dc19ef3d5365399df984bf7bcc9f159a69734da520746,PodSandboxId:beb6fc04f49959a994d7b1c49597b2774f1f9f1603a6a92900a3c892f24217e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764704835900346365,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a82993f9-890f-42b3-b87f-75109bc29419,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d1a000c624d302a7268a803abcaded799de1067dfdcb86078a644b456044cb,PodSandboxId:06aedd60144d6a54b6bf8a5ede4ba6b4151362054767b2c5a9c8f9ec733b4767,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764704824361018532,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-b2tlj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 894d2e76-6bee-47ff-a1c3-d1f8aab6607f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:552b15b68b32946e14090693036e660a3c7c38a07ec2710e8c8891e4f4f5eff2,PodSandboxId:fe395160ead79e4f0f4f42df5411cb06187c55d58b704d16cf95bd7c5837904c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704809083384999,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tzl55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7cda6124-e9fa-47a3-b5b9-f7ecd4f57c92,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d971e396cff0e171825cf0d664922d08b2baacc0177ffe920836c33ff62f5bef,PodSandboxId:503d7b8e10f9d542c3596eb19c6a2f53fbf4a3a03b07ac78bc9878b53fea9994,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704808954782475,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xcrr7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51611594-0cee-436d-b250-d4f6de45c4a3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e0b5fccdda788ff78ae45c212fa7f37e58221fa93d3c21d589764a6b05aaf8,PodSandboxId:e5a53db24f940a7c99ebe598d31edad34916c288e90d3f5b046b98425f5a2484,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764704776165005441,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba372f7-bafa-4796-83fd-614e28a2f517,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40ddb14912db9e2180fa7459590c78741df1db8a560c3f52f282094ddd5d5e5,PodSandboxId:31d00bf6f8559b281a393df0f27cfd549d6ba81a648590cefa2460511e58f0b6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764704753943841874,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rxk7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54b6cdd2-2cba-4f34-8f6e-97404e05daa0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f1032c42641877599da4c357142ffc0c9010c8938fd7796646f3a07f17ae2e,PodSandboxId:d56c7929ef2935436139202f4de420c166e2ec8d6687da4f162f4f3c6c9dbb6e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764704750865943931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d54bf3-f7ed-4a33-9aa3-2fdffc9409ac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c777554c8c4cbeb3e24de445cbf0b8746379f05608dd8e07e5d63aad2232d5,PodSandboxId:ed070729b80de71e3849d0103204a15c4e4748d7e79aaddbd4a6842f411082c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764704743514498278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6m2rj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5baa3358-504a-44f5-a1b4-a4935763d8a1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:884e256195c971684fdfd57fae5a22e51b72efc9c488beefa7dbc1398053ccf8,PodSandboxId:0cc32fa1b13ae37afb15b2f08bb56c0960192bfc22b25802681b40b3e99d474e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764704742978494380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djl9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0561e2a2-6a79-4061-9d79-8184acaaf5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e798f50a1b9f0f8744b28b963e39197019aaa848700fb85aa20a34e87742875,PodSandboxId:637c7d6ed168507e3d0d50dbb5697e9a105e20fc123ca5165feca3b38563dea5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764704731234069655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a63df877054465a08199db682a7023,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58626d12754a34cee89d42aa8573932034dff3e980e1884f18513279ffb1c18,PodSandboxId:eba45e94d6daeb88a65da15b529231c73d530cf270927daa4601aa4c4951e515,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764704731224169252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6221a52d2f493409019ef09c06151c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a081b4db79b8b1f4737deb621b890afe65d89ba1067799e6832b7207c9e456,PodSandboxId:dc223e9475f2b7aceda7488ca38effa0f63a923cb423eaac1ba12900c0bebfea,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764704731197069561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b887b50b80251e832603c8
b5f93c52f8,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f929d4de928a1b6b0bb56ea0553ff029fee7f23fad64cfaa0cecd6a9cc09a73,PodSandboxId:7e38051d9c0857cce9e725e497710c3ab30dada3aea47da10827c46d4b758f84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764704731129695707,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b536d076f2e6785501960dc5a6fdecb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96117270-9251-4f72-887a-fe3e1898b21d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.992584030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f3da5ef-ad4d-497f-bd02-3d3f3a664c29 name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.992681124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f3da5ef-ad4d-497f-bd02-3d3f3a664c29 name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.994239089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3883c6b-0881-4a32-bf5f-d2104b09c4ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.996063361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764705025996030712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3883c6b-0881-4a32-bf5f-d2104b09c4ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.997010447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3a63418-bea2-42d5-aff6-0b726e534072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.997521938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3a63418-bea2-42d5-aff6-0b726e534072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:25 addons-375150 crio[818]: time="2025-12-02 19:50:25.998019752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5075a8c5656f183c4adbb9b5bf98e87250c133be82edea5a12cf4b093425d5e,PodSandboxId:83d91d2daab9d8167d34f1e431f32d399d955f00272d0c8f03ef4477a7f25d44,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764704883639039455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f120ae74-ee28-4d2d-8418-16f78d1e0320,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0ad35c2e35348a13dc19ef3d5365399df984bf7bcc9f159a69734da520746,PodSandboxId:beb6fc04f49959a994d7b1c49597b2774f1f9f1603a6a92900a3c892f24217e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764704835900346365,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a82993f9-890f-42b3-b87f-75109bc29419,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d1a000c624d302a7268a803abcaded799de1067dfdcb86078a644b456044cb,PodSandboxId:06aedd60144d6a54b6bf8a5ede4ba6b4151362054767b2c5a9c8f9ec733b4767,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764704824361018532,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-b2tlj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 894d2e76-6bee-47ff-a1c3-d1f8aab6607f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:552b15b68b32946e14090693036e660a3c7c38a07ec2710e8c8891e4f4f5eff2,PodSandboxId:fe395160ead79e4f0f4f42df5411cb06187c55d58b704d16cf95bd7c5837904c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704809083384999,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tzl55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7cda6124-e9fa-47a3-b5b9-f7ecd4f57c92,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d971e396cff0e171825cf0d664922d08b2baacc0177ffe920836c33ff62f5bef,PodSandboxId:503d7b8e10f9d542c3596eb19c6a2f53fbf4a3a03b07ac78bc9878b53fea9994,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704808954782475,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xcrr7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51611594-0cee-436d-b250-d4f6de45c4a3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e0b5fccdda788ff78ae45c212fa7f37e58221fa93d3c21d589764a6b05aaf8,PodSandboxId:e5a53db24f940a7c99ebe598d31edad34916c288e90d3f5b046b98425f5a2484,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764704776165005441,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba372f7-bafa-4796-83fd-614e28a2f517,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40ddb14912db9e2180fa7459590c78741df1db8a560c3f52f282094ddd5d5e5,PodSandboxId:31d00bf6f8559b281a393df0f27cfd549d6ba81a648590cefa2460511e58f0b6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764704753943841874,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rxk7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54b6cdd2-2cba-4f34-8f6e-97404e05daa0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f1032c42641877599da4c357142ffc0c9010c8938fd7796646f3a07f17ae2e,PodSandboxId:d56c7929ef2935436139202f4de420c166e2ec8d6687da4f162f4f3c6c9dbb6e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764704750865943931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d54bf3-f7ed-4a33-9aa3-2fdffc9409ac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c777554c8c4cbeb3e24de445cbf0b8746379f05608dd8e07e5d63aad2232d5,PodSandboxId:ed070729b80de71e3849d0103204a15c4e4748d7e79aaddbd4a6842f411082c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764704743514498278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6m2rj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5baa3358-504a-44f5-a1b4-a4935763d8a1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:884e256195c971684fdfd57fae5a22e51b72efc9c488beefa7dbc1398053ccf8,PodSandboxId:0cc32fa1b13ae37afb15b2f08bb56c0960192bfc22b25802681b40b3e99d474e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764704742978494380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djl9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0561e2a2-6a79-4061-9d79-8184acaaf5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e798f50a1b9f0f8744b28b963e39197019aaa848700fb85aa20a34e87742875,PodSandboxId:637c7d6ed168507e3d0d50dbb5697e9a105e20fc123ca5165feca3b38563dea5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764704731234069655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a63df877054465a08199db682a7023,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58626d12754a34cee89d42aa8573932034dff3e980e1884f18513279ffb1c18,PodSandboxId:eba45e94d6daeb88a65da15b529231c73d530cf270927daa4601aa4c4951e515,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764704731224169252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6221a52d2f493409019ef09c06151c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a081b4db79b8b1f4737deb621b890afe65d89ba1067799e6832b7207c9e456,PodSandboxId:dc223e9475f2b7aceda7488ca38effa0f63a923cb423eaac1ba12900c0bebfea,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764704731197069561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b887b50b80251e832603c8
b5f93c52f8,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f929d4de928a1b6b0bb56ea0553ff029fee7f23fad64cfaa0cecd6a9cc09a73,PodSandboxId:7e38051d9c0857cce9e725e497710c3ab30dada3aea47da10827c46d4b758f84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764704731129695707,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b536d076f2e6785501960dc5a6fdecb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3a63418-bea2-42d5-aff6-0b726e534072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.033804797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f5ba690-7113-452b-94ed-cafac1eefd35 name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.033895223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f5ba690-7113-452b-94ed-cafac1eefd35 name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.036066124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79a4b939-1f08-4696-bca4-0ca762d13003 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.037283150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764705026037252042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79a4b939-1f08-4696-bca4-0ca762d13003 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.038400471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a722189a-b3aa-4a6b-9bbd-18c74b18e9a3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.038599893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a722189a-b3aa-4a6b-9bbd-18c74b18e9a3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.038974058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5075a8c5656f183c4adbb9b5bf98e87250c133be82edea5a12cf4b093425d5e,PodSandboxId:83d91d2daab9d8167d34f1e431f32d399d955f00272d0c8f03ef4477a7f25d44,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764704883639039455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f120ae74-ee28-4d2d-8418-16f78d1e0320,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0ad35c2e35348a13dc19ef3d5365399df984bf7bcc9f159a69734da520746,PodSandboxId:beb6fc04f49959a994d7b1c49597b2774f1f9f1603a6a92900a3c892f24217e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764704835900346365,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a82993f9-890f-42b3-b87f-75109bc29419,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d1a000c624d302a7268a803abcaded799de1067dfdcb86078a644b456044cb,PodSandboxId:06aedd60144d6a54b6bf8a5ede4ba6b4151362054767b2c5a9c8f9ec733b4767,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764704824361018532,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-b2tlj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 894d2e76-6bee-47ff-a1c3-d1f8aab6607f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:552b15b68b32946e14090693036e660a3c7c38a07ec2710e8c8891e4f4f5eff2,PodSandboxId:fe395160ead79e4f0f4f42df5411cb06187c55d58b704d16cf95bd7c5837904c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704809083384999,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tzl55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7cda6124-e9fa-47a3-b5b9-f7ecd4f57c92,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d971e396cff0e171825cf0d664922d08b2baacc0177ffe920836c33ff62f5bef,PodSandboxId:503d7b8e10f9d542c3596eb19c6a2f53fbf4a3a03b07ac78bc9878b53fea9994,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704808954782475,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xcrr7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51611594-0cee-436d-b250-d4f6de45c4a3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e0b5fccdda788ff78ae45c212fa7f37e58221fa93d3c21d589764a6b05aaf8,PodSandboxId:e5a53db24f940a7c99ebe598d31edad34916c288e90d3f5b046b98425f5a2484,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764704776165005441,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba372f7-bafa-4796-83fd-614e28a2f517,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40ddb14912db9e2180fa7459590c78741df1db8a560c3f52f282094ddd5d5e5,PodSandboxId:31d00bf6f8559b281a393df0f27cfd549d6ba81a648590cefa2460511e58f0b6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764704753943841874,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rxk7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54b6cdd2-2cba-4f34-8f6e-97404e05daa0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f1032c42641877599da4c357142ffc0c9010c8938fd7796646f3a07f17ae2e,PodSandboxId:d56c7929ef2935436139202f4de420c166e2ec8d6687da4f162f4f3c6c9dbb6e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764704750865943931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d54bf3-f7ed-4a33-9aa3-2fdffc9409ac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c777554c8c4cbeb3e24de445cbf0b8746379f05608dd8e07e5d63aad2232d5,PodSandboxId:ed070729b80de71e3849d0103204a15c4e4748d7e79aaddbd4a6842f411082c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764704743514498278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6m2rj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5baa3358-504a-44f5-a1b4-a4935763d8a1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:884e256195c971684fdfd57fae5a22e51b72efc9c488beefa7dbc1398053ccf8,PodSandboxId:0cc32fa1b13ae37afb15b2f08bb56c0960192bfc22b25802681b40b3e99d474e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764704742978494380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djl9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0561e2a2-6a79-4061-9d79-8184acaaf5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e798f50a1b9f0f8744b28b963e39197019aaa848700fb85aa20a34e87742875,PodSandboxId:637c7d6ed168507e3d0d50dbb5697e9a105e20fc123ca5165feca3b38563dea5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764704731234069655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a63df877054465a08199db682a7023,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58626d12754a34cee89d42aa8573932034dff3e980e1884f18513279ffb1c18,PodSandboxId:eba45e94d6daeb88a65da15b529231c73d530cf270927daa4601aa4c4951e515,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764704731224169252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6221a52d2f493409019ef09c06151c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a081b4db79b8b1f4737deb621b890afe65d89ba1067799e6832b7207c9e456,PodSandboxId:dc223e9475f2b7aceda7488ca38effa0f63a923cb423eaac1ba12900c0bebfea,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764704731197069561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b887b50b80251e832603c8
b5f93c52f8,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f929d4de928a1b6b0bb56ea0553ff029fee7f23fad64cfaa0cecd6a9cc09a73,PodSandboxId:7e38051d9c0857cce9e725e497710c3ab30dada3aea47da10827c46d4b758f84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764704731129695707,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b536d076f2e6785501960dc5a6fdecb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a722189a-b3aa-4a6b-9bbd-18c74b18e9a3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.072748530Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2d3d183-660d-4ef1-b49f-b64617f6f7dd name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.072922218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2d3d183-660d-4ef1-b49f-b64617f6f7dd name=/runtime.v1.RuntimeService/Version
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.074330871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73e2ef9b-90d1-49b1-b41d-5c3d7370cd73 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.076223823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764705026076193342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73e2ef9b-90d1-49b1-b41d-5c3d7370cd73 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.077336988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=126b815f-7ac1-4907-99fe-dde489b03ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.077422823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=126b815f-7ac1-4907-99fe-dde489b03ab0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 19:50:26 addons-375150 crio[818]: time="2025-12-02 19:50:26.077808820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f5075a8c5656f183c4adbb9b5bf98e87250c133be82edea5a12cf4b093425d5e,PodSandboxId:83d91d2daab9d8167d34f1e431f32d399d955f00272d0c8f03ef4477a7f25d44,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764704883639039455,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f120ae74-ee28-4d2d-8418-16f78d1e0320,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa0ad35c2e35348a13dc19ef3d5365399df984bf7bcc9f159a69734da520746,PodSandboxId:beb6fc04f49959a994d7b1c49597b2774f1f9f1603a6a92900a3c892f24217e3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764704835900346365,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a82993f9-890f-42b3-b87f-75109bc29419,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92d1a000c624d302a7268a803abcaded799de1067dfdcb86078a644b456044cb,PodSandboxId:06aedd60144d6a54b6bf8a5ede4ba6b4151362054767b2c5a9c8f9ec733b4767,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764704824361018532,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-b2tlj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 894d2e76-6bee-47ff-a1c3-d1f8aab6607f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:552b15b68b32946e14090693036e660a3c7c38a07ec2710e8c8891e4f4f5eff2,PodSandboxId:fe395160ead79e4f0f4f42df5411cb06187c55d58b704d16cf95bd7c5837904c,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704809083384999,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-tzl55,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7cda6124-e9fa-47a3-b5b9-f7ecd4f57c92,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d971e396cff0e171825cf0d664922d08b2baacc0177ffe920836c33ff62f5bef,PodSandboxId:503d7b8e10f9d542c3596eb19c6a2f53fbf4a3a03b07ac78bc9878b53fea9994,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764704808954782475,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xcrr7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51611594-0cee-436d-b250-d4f6de45c4a3,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16e0b5fccdda788ff78ae45c212fa7f37e58221fa93d3c21d589764a6b05aaf8,PodSandboxId:e5a53db24f940a7c99ebe598d31edad34916c288e90d3f5b046b98425f5a2484,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764704776165005441,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bba372f7-bafa-4796-83fd-614e28a2f517,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b40ddb14912db9e2180fa7459590c78741df1db8a560c3f52f282094ddd5d5e5,PodSandboxId:31d00bf6f8559b281a393df0f27cfd549d6ba81a648590cefa2460511e58f0b6,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764704753943841874,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-rxk7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54b6cdd2-2cba-4f34-8f6e-97404e05daa0,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:21f1032c42641877599da4c357142ffc0c9010c8938fd7796646f3a07f17ae2e,PodSandboxId:d56c7929ef2935436139202f4de420c166e2ec8d6687da4f162f4f3c6c9dbb6e,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764704750865943931,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d54bf3-f7ed-4a33-9aa3-2fdffc9409ac,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c777554c8c4cbeb3e24de445cbf0b8746379f05608dd8e07e5d63aad2232d5,PodSandboxId:ed070729b80de71e3849d0103204a15c4e4748d7e79aaddbd4a6842f411082c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764704743514498278,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6m2rj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5baa3358-504a-44f5-a1b4-a4935763d8a1,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:884e256195c971684fdfd57fae5a22e51b72efc9c488beefa7dbc1398053ccf8,PodSandboxId:0cc32fa1b13ae37afb15b2f08bb56c0960192bfc22b25802681b40b3e99d474e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764704742978494380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-djl9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0561e2a2-6a79-4061-9d79-8184acaaf5a9,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e798f50a1b9f0f8744b28b963e39197019aaa848700fb85aa20a34e87742875,PodSandboxId:637c7d6ed168507e3d0d50dbb5697e9a105e20fc123ca5165feca3b38563dea5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764704731234069655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47a63df877054465a08199db682a7023,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d58626d12754a34cee89d42aa8573932034dff3e980e1884f18513279ffb1c18,PodSandboxId:eba45e94d6daeb88a65da15b529231c73d530cf270927daa4601aa4c4951e515,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764704731224169252,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6221a52d2f493409019ef09c06151c1d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a081b4db79b8b1f4737deb621b890afe65d89ba1067799e6832b7207c9e456,PodSandboxId:dc223e9475f2b7aceda7488ca38effa0f63a923cb423eaac1ba12900c0bebfea,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764704731197069561,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b887b50b80251e832603c8
b5f93c52f8,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f929d4de928a1b6b0bb56ea0553ff029fee7f23fad64cfaa0cecd6a9cc09a73,PodSandboxId:7e38051d9c0857cce9e725e497710c3ab30dada3aea47da10827c46d4b758f84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764704731129695707,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiser
ver-addons-375150,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b536d076f2e6785501960dc5a6fdecb,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=126b815f-7ac1-4907-99fe-dde489b03ab0 name=/runtime.v1.RuntimeService/ListContainers
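	The CRI-O excerpt above is the tail of the runtime's debug journal; the repeated Version / ImageFsInfo / ListContainers request-response pairs are the kubelet's routine CRI polling, not errors. A minimal sketch of tailing the same stream on the node, reusing the profile name from this report and assuming CRI-O runs as the crio systemd unit (the usual minikube setup, but an assumption here):
	
	  # tail the CRI-O journal on the minikube node
	  out/minikube-linux-amd64 -p addons-375150 ssh "sudo journalctl -u crio --no-pager -n 50"
	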
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	f5075a8c5656f       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   83d91d2daab9d       nginx                                      default
	cfa0ad35c2e35       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   beb6fc04f4995       busybox                                    default
	92d1a000c624d       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   06aedd60144d6       ingress-nginx-controller-6c8bf45fb-b2tlj   ingress-nginx
	552b15b68b329       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   fe395160ead79       ingress-nginx-admission-patch-tzl55        ingress-nginx
	d971e396cff0e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   503d7b8e10f9d       ingress-nginx-admission-create-xcrr7       ingress-nginx
	16e0b5fccdda7       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   e5a53db24f940       kube-ingress-dns-minikube                  kube-system
	b40ddb14912db       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   31d00bf6f8559       amd-gpu-device-plugin-rxk7z                kube-system
	21f1032c42641       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d56c7929ef293       storage-provisioner                        kube-system
	25c777554c8c4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   ed070729b80de       coredns-66bc5c9577-6m2rj                   kube-system
	884e256195c97       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   0cc32fa1b13ae       kube-proxy-djl9q                           kube-system
	7e798f50a1b9f       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   637c7d6ed1685       kube-scheduler-addons-375150               kube-system
	d58626d12754a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   eba45e94d6dae       kube-controller-manager-addons-375150      kube-system
	74a081b4db79b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   dc223e9475f2b       etcd-addons-375150                         kube-system
	3f929d4de928a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   7e38051d9c085       kube-apiserver-addons-375150               kube-system
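	The container status table is the CRI-level listing for the node, including the exited ingress admission create/patch jobs. A sketch of reproducing the same snapshot by hand, assuming crictl is present on the node (standard for minikube's CRI-O images, though treated as an assumption here):
	
	  # list all CRI containers on the node, including exited ones
	  out/minikube-linux-amd64 -p addons-375150 ssh "sudo crictl ps -a"
	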
	
	
	==> coredns [25c777554c8c4cbeb3e24de445cbf0b8746379f05608dd8e07e5d63aad2232d5] <==
	[INFO] 10.244.0.8:44683 - 37316 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000130563s
	[INFO] 10.244.0.8:44683 - 34969 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00023271s
	[INFO] 10.244.0.8:44683 - 56836 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000281461s
	[INFO] 10.244.0.8:44683 - 22604 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000161763s
	[INFO] 10.244.0.8:44683 - 8230 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000417764s
	[INFO] 10.244.0.8:44683 - 63652 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000288546s
	[INFO] 10.244.0.8:44683 - 62 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000082227s
	[INFO] 10.244.0.8:37441 - 15776 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221463s
	[INFO] 10.244.0.8:37441 - 16085 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000269931s
	[INFO] 10.244.0.8:36333 - 48286 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065099s
	[INFO] 10.244.0.8:36333 - 48565 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000097711s
	[INFO] 10.244.0.8:48596 - 27146 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000123554s
	[INFO] 10.244.0.8:48596 - 27637 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000329667s
	[INFO] 10.244.0.8:36553 - 64008 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000073545s
	[INFO] 10.244.0.8:36553 - 64410 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000087915s
	[INFO] 10.244.0.23:56525 - 4714 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000541647s
	[INFO] 10.244.0.23:37967 - 13780 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164794s
	[INFO] 10.244.0.23:52597 - 54366 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111289s
	[INFO] 10.244.0.23:57624 - 12652 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012732s
	[INFO] 10.244.0.23:36957 - 22607 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000325697s
	[INFO] 10.244.0.23:52470 - 5315 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135071s
	[INFO] 10.244.0.23:32830 - 17882 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000995067s
	[INFO] 10.244.0.23:56752 - 39803 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.00287805s
	[INFO] 10.244.0.27:48609 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000298619s
	[INFO] 10.244.0.27:49980 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182556s
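	The NXDOMAIN lines in the CoreDNS log are expected behaviour, not failures: with the default pod resolv.conf (ndots:5) each name is first retried with the pod's search suffixes appended (here kube-system.svc.cluster.local, svc.cluster.local and cluster.local), and only the final fully qualified query answers NOERROR. A sketch of issuing the same lookup from a throwaway pod, reusing the busybox image already pulled in this cluster; the pod name dns-check is illustrative:
	
	  kubectl --context addons-375150 run dns-check --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local
	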
	
	
	==> describe nodes <==
	Name:               addons-375150
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-375150
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=addons-375150
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T19_45_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-375150
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 19:45:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-375150
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 19:50:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 19:48:10 +0000   Tue, 02 Dec 2025 19:45:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 19:48:10 +0000   Tue, 02 Dec 2025 19:45:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 19:48:10 +0000   Tue, 02 Dec 2025 19:45:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 19:48:10 +0000   Tue, 02 Dec 2025 19:45:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    addons-375150
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 d624609594f34cee908b7809c660f726
	  System UUID:                d6246095-94f3-4cee-908b-7809c660f726
	  Boot ID:                    83089e30-ad3b-435b-9bbd-275e18ff148e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     hello-world-app-5d498dc89-p9m4h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-b2tlj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m35s
	  kube-system                 amd-gpu-device-plugin-rxk7z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 coredns-66bc5c9577-6m2rj                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m44s
	  kube-system                 etcd-addons-375150                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m51s
	  kube-system                 kube-apiserver-addons-375150                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-controller-manager-addons-375150       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-djl9q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-scheduler-addons-375150                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m41s  kube-proxy       
	  Normal  Starting                 4m50s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m50s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m49s  kubelet          Node addons-375150 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s  kubelet          Node addons-375150 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s  kubelet          Node addons-375150 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m49s  kubelet          Node addons-375150 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node addons-375150 event: Registered Node addons-375150 in Controller
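	The node snapshot above is kubectl's describe output for the single control-plane node; a sketch of fetching it directly, using the context and node name shown in the log:
	
	  kubectl --context addons-375150 describe node addons-375150
	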
	
	
	==> dmesg <==
	[  +0.059238] kauditd_printk_skb: 300 callbacks suppressed
	[  +0.938737] kauditd_printk_skb: 402 callbacks suppressed
	[Dec 2 19:46] kauditd_printk_skb: 301 callbacks suppressed
	[  +5.851198] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.447542] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.395463] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.611355] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.692602] kauditd_printk_skb: 107 callbacks suppressed
	[  +5.287110] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.240726] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.800204] kauditd_printk_skb: 154 callbacks suppressed
	[Dec 2 19:47] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.089202] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.474789] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.528803] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.879545] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.629905] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.051159] kauditd_printk_skb: 120 callbacks suppressed
	[  +1.807445] kauditd_printk_skb: 155 callbacks suppressed
	[  +0.034532] kauditd_printk_skb: 216 callbacks suppressed
	[Dec 2 19:48] kauditd_printk_skb: 70 callbacks suppressed
	[  +0.000985] kauditd_printk_skb: 56 callbacks suppressed
	[  +6.861443] kauditd_printk_skb: 41 callbacks suppressed
	[  +3.035772] kauditd_printk_skb: 127 callbacks suppressed
	[Dec 2 19:50] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [74a081b4db79b8b1f4737deb621b890afe65d89ba1067799e6832b7207c9e456] <==
	{"level":"info","ts":"2025-12-02T19:47:02.672836Z","caller":"traceutil/trace.go:172","msg":"trace[1441948442] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"236.51385ms","start":"2025-12-02T19:47:02.436306Z","end":"2025-12-02T19:47:02.672820Z","steps":["trace[1441948442] 'process raft request'  (duration: 236.408281ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:47:02.672903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.113429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-12-02T19:47:02.672936Z","caller":"traceutil/trace.go:172","msg":"trace[1258626868] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1128; }","duration":"107.14841ms","start":"2025-12-02T19:47:02.565770Z","end":"2025-12-02T19:47:02.672918Z","steps":["trace[1258626868] 'agreement among raft nodes before linearized reading'  (duration: 107.045278ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:03.549346Z","caller":"traceutil/trace.go:172","msg":"trace[1164997636] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"150.94525ms","start":"2025-12-02T19:47:03.398385Z","end":"2025-12-02T19:47:03.549330Z","steps":["trace[1164997636] 'process raft request'  (duration: 150.862024ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:36.807975Z","caller":"traceutil/trace.go:172","msg":"trace[1731234685] linearizableReadLoop","detail":"{readStateIndex:1356; appliedIndex:1356; }","duration":"120.831083ms","start":"2025-12-02T19:47:36.687128Z","end":"2025-12-02T19:47:36.807959Z","steps":["trace[1731234685] 'read index received'  (duration: 120.825433ms)","trace[1731234685] 'applied index is now lower than readState.Index'  (duration: 5.085µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T19:47:36.808179Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.039255ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:36.808204Z","caller":"traceutil/trace.go:172","msg":"trace[1262635174] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1314; }","duration":"121.075201ms","start":"2025-12-02T19:47:36.687119Z","end":"2025-12-02T19:47:36.808194Z","steps":["trace[1262635174] 'agreement among raft nodes before linearized reading'  (duration: 121.018052ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:47:36.808670Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.265344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:36.808754Z","caller":"traceutil/trace.go:172","msg":"trace[908867896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1315; }","duration":"104.354193ms","start":"2025-12-02T19:47:36.704391Z","end":"2025-12-02T19:47:36.808745Z","steps":["trace[908867896] 'agreement among raft nodes before linearized reading'  (duration: 104.246516ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:36.808963Z","caller":"traceutil/trace.go:172","msg":"trace[1364251653] transaction","detail":"{read_only:false; response_revision:1315; number_of_response:1; }","duration":"189.971616ms","start":"2025-12-02T19:47:36.618983Z","end":"2025-12-02T19:47:36.808955Z","steps":["trace[1364251653] 'process raft request'  (duration: 189.523294ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:38.136130Z","caller":"traceutil/trace.go:172","msg":"trace[1007759357] linearizableReadLoop","detail":"{readStateIndex:1382; appliedIndex:1382; }","duration":"182.234987ms","start":"2025-12-02T19:47:37.953873Z","end":"2025-12-02T19:47:38.136108Z","steps":["trace[1007759357] 'read index received'  (duration: 182.229255ms)","trace[1007759357] 'applied index is now lower than readState.Index'  (duration: 4.767µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T19:47:38.136272Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.376009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:38.136299Z","caller":"traceutil/trace.go:172","msg":"trace[2074730743] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:1339; }","duration":"182.42155ms","start":"2025-12-02T19:47:37.953868Z","end":"2025-12-02T19:47:38.136290Z","steps":["trace[2074730743] 'agreement among raft nodes before linearized reading'  (duration: 182.342241ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:47:38.136789Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.624874ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:38.136822Z","caller":"traceutil/trace.go:172","msg":"trace[1922227072] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1340; }","duration":"141.665823ms","start":"2025-12-02T19:47:37.995149Z","end":"2025-12-02T19:47:38.136814Z","steps":["trace[1922227072] 'agreement among raft nodes before linearized reading'  (duration: 141.610125ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:38.137026Z","caller":"traceutil/trace.go:172","msg":"trace[222931971] transaction","detail":"{read_only:false; response_revision:1340; number_of_response:1; }","duration":"240.759012ms","start":"2025-12-02T19:47:37.896259Z","end":"2025-12-02T19:47:38.137019Z","steps":["trace[222931971] 'process raft request'  (duration: 240.353345ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:47:38.137108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.375038ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:38.137124Z","caller":"traceutil/trace.go:172","msg":"trace[855131691] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:0; response_revision:1340; }","duration":"105.3919ms","start":"2025-12-02T19:47:38.031726Z","end":"2025-12-02T19:47:38.137118Z","steps":["trace[855131691] 'agreement among raft nodes before linearized reading'  (duration: 105.362374ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:57.864499Z","caller":"traceutil/trace.go:172","msg":"trace[69961083] linearizableReadLoop","detail":"{readStateIndex:1624; appliedIndex:1624; }","duration":"177.216224ms","start":"2025-12-02T19:47:57.687260Z","end":"2025-12-02T19:47:57.864476Z","steps":["trace[69961083] 'read index received'  (duration: 177.211534ms)","trace[69961083] 'applied index is now lower than readState.Index'  (duration: 4.014µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-02T19:47:57.864860Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.547996ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:47:57.865025Z","caller":"traceutil/trace.go:172","msg":"trace[138563473] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1565; }","duration":"177.762746ms","start":"2025-12-02T19:47:57.687255Z","end":"2025-12-02T19:47:57.865018Z","steps":["trace[138563473] 'agreement among raft nodes before linearized reading'  (duration: 177.518428ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-02T19:47:57.864910Z","caller":"traceutil/trace.go:172","msg":"trace[1254536845] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"343.036013ms","start":"2025-12-02T19:47:57.521863Z","end":"2025-12-02T19:47:57.864899Z","steps":["trace[1254536845] 'process raft request'  (duration: 342.951915ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-02T19:47:57.865229Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-02T19:47:57.521847Z","time spent":"343.280228ms","remote":"127.0.0.1:40034","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2297,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/namespaces/yakd-dashboard\" mod_revision:1489 > success:<request_put:<key:\"/registry/namespaces/yakd-dashboard\" value_size:2254 >> failure:<request_range:<key:\"/registry/namespaces/yakd-dashboard\" > >"}
	{"level":"warn","ts":"2025-12-02T19:48:23.571972Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.538264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-02T19:48:23.572117Z","caller":"traceutil/trace.go:172","msg":"trace[562872488] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1817; }","duration":"174.739356ms","start":"2025-12-02T19:48:23.397364Z","end":"2025-12-02T19:48:23.572103Z","steps":["trace[562872488] 'range keys from in-memory index tree'  (duration: 174.475583ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:50:26 up 5 min,  0 users,  load average: 0.36, 1.22, 0.68
	Linux addons-375150 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3f929d4de928a1b6b0bb56ea0553ff029fee7f23fad64cfaa0cecd6a9cc09a73] <==
	E1202 19:46:30.144720       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.75.250:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.75.250:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.75.250:443: connect: connection refused" logger="UnhandledError"
	E1202 19:46:30.220831       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1202 19:46:30.265577       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1202 19:47:23.285404       1 conn.go:339] Error on socket receive: read tcp 192.168.39.62:8443->192.168.39.1:36332: use of closed network connection
	E1202 19:47:23.471032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.62:8443->192.168.39.1:36362: use of closed network connection
	I1202 19:47:32.593062       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.236.151"}
	I1202 19:47:52.855958       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1202 19:47:53.033647       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.244.9"}
	I1202 19:48:05.412013       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1202 19:48:09.840647       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1202 19:48:19.927204       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 19:48:19.927388       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 19:48:19.961078       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 19:48:19.961117       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 19:48:19.966181       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 19:48:19.966224       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 19:48:19.993261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 19:48:19.993327       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1202 19:48:20.033828       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1202 19:48:20.033863       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1202 19:48:20.966506       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1202 19:48:21.035362       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1202 19:48:21.228250       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1202 19:48:31.171807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1202 19:50:24.911400       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.144.36"}
	
	
	==> kube-controller-manager [d58626d12754a34cee89d42aa8573932034dff3e980e1884f18513279ffb1c18] <==
	E1202 19:48:37.775721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:48:38.301676       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:48:38.302956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:48:41.949184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:48:41.950249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1202 19:48:42.106752       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1202 19:48:42.106783       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1202 19:48:42.153464       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1202 19:48:42.153537       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1202 19:48:55.840003       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:48:55.841188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:49:00.975734       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:49:00.976958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:49:02.307898       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:49:02.309143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:49:30.374119       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:49:30.375640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:49:30.377728       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:49:30.378949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:49:43.770050       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:49:43.771108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:50:04.030366       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:50:04.031646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1202 19:50:14.733929       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1202 19:50:14.735096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [884e256195c971684fdfd57fae5a22e51b72efc9c488beefa7dbc1398053ccf8] <==
	I1202 19:45:43.927597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 19:45:44.028222       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 19:45:44.029095       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.62"]
	E1202 19:45:44.034422       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 19:45:44.298809       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1202 19:45:44.298864       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 19:45:44.298892       1 server_linux.go:132] "Using iptables Proxier"
	I1202 19:45:44.333679       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 19:45:44.335729       1 server.go:527] "Version info" version="v1.34.2"
	I1202 19:45:44.335747       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 19:45:44.342010       1 config.go:200] "Starting service config controller"
	I1202 19:45:44.342041       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 19:45:44.342073       1 config.go:106] "Starting endpoint slice config controller"
	I1202 19:45:44.342077       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 19:45:44.342088       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 19:45:44.342091       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 19:45:44.348711       1 config.go:309] "Starting node config controller"
	I1202 19:45:44.350862       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 19:45:44.351097       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 19:45:44.442960       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 19:45:44.442998       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1202 19:45:44.443036       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7e798f50a1b9f0f8744b28b963e39197019aaa848700fb85aa20a34e87742875] <==
	E1202 19:45:34.032954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 19:45:34.033024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:45:34.035624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:45:34.035693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 19:45:34.035757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 19:45:34.035873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:45:34.035957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:45:34.036090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:45:34.036623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:45:34.036776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 19:45:34.917290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 19:45:34.946379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 19:45:34.950911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 19:45:34.981985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 19:45:34.997958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1202 19:45:35.004355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 19:45:35.037961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 19:45:35.063137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 19:45:35.098775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 19:45:35.185332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 19:45:35.229923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 19:45:35.235512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 19:45:35.244305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 19:45:35.264014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1202 19:45:38.224196       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 19:48:39 addons-375150 kubelet[1504]: I1202 19:48:39.646097    1504 scope.go:117] "RemoveContainer" containerID="d69f881c8eb11f67caeed6952f3fd26118c129181a775af2eb5965644237e045"
	Dec 02 19:48:39 addons-375150 kubelet[1504]: I1202 19:48:39.770701    1504 scope.go:117] "RemoveContainer" containerID="33a87227f6373c80991266a06833b6852320a7cbaa6991b0faf668e50f6ea244"
	Dec 02 19:48:47 addons-375150 kubelet[1504]: E1202 19:48:47.042839    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704927042404772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:48:47 addons-375150 kubelet[1504]: E1202 19:48:47.042871    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704927042404772 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:48:57 addons-375150 kubelet[1504]: E1202 19:48:57.045402    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704937044979253 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:48:57 addons-375150 kubelet[1504]: E1202 19:48:57.045471    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704937044979253 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:07 addons-375150 kubelet[1504]: E1202 19:49:07.049499    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704947048781828 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:07 addons-375150 kubelet[1504]: E1202 19:49:07.049528    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704947048781828 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:17 addons-375150 kubelet[1504]: E1202 19:49:17.053175    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704957052475987 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:17 addons-375150 kubelet[1504]: E1202 19:49:17.053221    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704957052475987 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:27 addons-375150 kubelet[1504]: E1202 19:49:27.057411    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704967056759748 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:27 addons-375150 kubelet[1504]: E1202 19:49:27.057520    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704967056759748 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:27 addons-375150 kubelet[1504]: I1202 19:49:27.871560    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:49:37 addons-375150 kubelet[1504]: E1202 19:49:37.060781    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704977060381163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:37 addons-375150 kubelet[1504]: E1202 19:49:37.060807    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704977060381163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:47 addons-375150 kubelet[1504]: E1202 19:49:47.063595    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704987063194985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:47 addons-375150 kubelet[1504]: E1202 19:49:47.063622    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704987063194985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:57 addons-375150 kubelet[1504]: E1202 19:49:57.067997    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764704997067408660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:57 addons-375150 kubelet[1504]: E1202 19:49:57.068052    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764704997067408660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:49:58 addons-375150 kubelet[1504]: I1202 19:49:58.871301    1504 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rxk7z" secret="" err="secret \"gcp-auth\" not found"
	Dec 02 19:50:07 addons-375150 kubelet[1504]: E1202 19:50:07.070936    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764705007070352939 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:50:07 addons-375150 kubelet[1504]: E1202 19:50:07.070967    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764705007070352939 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:50:17 addons-375150 kubelet[1504]: E1202 19:50:17.073800    1504 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764705017073290175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:50:17 addons-375150 kubelet[1504]: E1202 19:50:17.073836    1504 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764705017073290175 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 02 19:50:24 addons-375150 kubelet[1504]: I1202 19:50:24.874750    1504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxq6p\" (UniqueName: \"kubernetes.io/projected/071423d6-f0f4-4afc-857d-269611e5f578-kube-api-access-fxq6p\") pod \"hello-world-app-5d498dc89-p9m4h\" (UID: \"071423d6-f0f4-4afc-857d-269611e5f578\") " pod="default/hello-world-app-5d498dc89-p9m4h"
	
	
	==> storage-provisioner [21f1032c42641877599da4c357142ffc0c9010c8938fd7796646f3a07f17ae2e] <==
	W1202 19:50:01.808051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:03.812146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:03.818257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:05.821743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:05.827247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:07.830790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:07.839364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:09.843026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:09.849256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:11.852743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:11.860884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:13.864422       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:13.871009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:15.874913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:15.880587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:17.884372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:17.889281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:19.892302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:19.900167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:21.904170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:21.909809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:23.913795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:23.921581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:25.925287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1202 19:50:25.932173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-375150 -n addons-375150
helpers_test.go:269: (dbg) Run:  kubectl --context addons-375150 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-p9m4h ingress-nginx-admission-create-xcrr7 ingress-nginx-admission-patch-tzl55
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-375150 describe pod hello-world-app-5d498dc89-p9m4h ingress-nginx-admission-create-xcrr7 ingress-nginx-admission-patch-tzl55
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-375150 describe pod hello-world-app-5d498dc89-p9m4h ingress-nginx-admission-create-xcrr7 ingress-nginx-admission-patch-tzl55: exit status 1 (71.904592ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-p9m4h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-375150/192.168.39.62
	Start Time:       Tue, 02 Dec 2025 19:50:24 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fxq6p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fxq6p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-p9m4h to addons-375150
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xcrr7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tzl55" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-375150 describe pod hello-world-app-5d498dc89-p9m4h ingress-nginx-admission-create-xcrr7 ingress-nginx-admission-patch-tzl55: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable ingress-dns --alsologtostderr -v=1: (1.116524431s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable ingress --alsologtostderr -v=1: (7.699372303s)
--- FAIL: TestAddons/parallel/Ingress (163.34s)

                                                
                                    
x
+
TestPreload (113.69s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-234694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1202 20:37:50.828294  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-234694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (59.732372004s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-234694 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-234694 image pull gcr.io/k8s-minikube/busybox: (3.536884288s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-234694
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-234694: (6.897652949s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-234694 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-234694 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (40.855942105s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-234694 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
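For reference, the failing sequence can be replayed by hand with the same minikube invocations that preload_test.go logged above (a minimal sketch; the profile name test-preload-234694 and all flags are copied from this run, and the final image list is the step the test expects to contain gcr.io/k8s-minikube/busybox):

	# start a fresh profile without the preloaded tarball
	out/minikube-linux-amd64 start -p test-preload-234694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=crio
	# pull an extra image into the node's container storage
	out/minikube-linux-amd64 -p test-preload-234694 image pull gcr.io/k8s-minikube/busybox
	# stop, then restart with preload enabled
	out/minikube-linux-amd64 stop -p test-preload-234694
	out/minikube-linux-amd64 start -p test-preload-234694 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio
	# the test asserts busybox survives the restart; in this run it is missing from the list
	out/minikube-linux-amd64 -p test-preload-234694 image list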
panic.go:615: *** TestPreload FAILED at 2025-12-02 20:39:38.96489053 +0000 UTC m=+3317.801870316
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-234694 -n test-preload-234694
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-234694 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-234694 logs -n 25: (1.018470027s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-519659 ssh -n multinode-519659-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ ssh     │ multinode-519659 ssh -n multinode-519659 sudo cat /home/docker/cp-test_multinode-519659-m03_multinode-519659.txt                                          │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ cp      │ multinode-519659 cp multinode-519659-m03:/home/docker/cp-test.txt multinode-519659-m02:/home/docker/cp-test_multinode-519659-m03_multinode-519659-m02.txt │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ ssh     │ multinode-519659 ssh -n multinode-519659-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ ssh     │ multinode-519659 ssh -n multinode-519659-m02 sudo cat /home/docker/cp-test_multinode-519659-m03_multinode-519659-m02.txt                                  │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ node    │ multinode-519659 node stop m03                                                                                                                            │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ node    │ multinode-519659 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:27 UTC │
	│ node    │ list -p multinode-519659                                                                                                                                  │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │                     │
	│ stop    │ -p multinode-519659                                                                                                                                       │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:27 UTC │ 02 Dec 25 20:30 UTC │
	│ start   │ -p multinode-519659 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:30 UTC │ 02 Dec 25 20:32 UTC │
	│ node    │ list -p multinode-519659                                                                                                                                  │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │                     │
	│ node    │ multinode-519659 node delete m03                                                                                                                          │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │ 02 Dec 25 20:32 UTC │
	│ stop    │ multinode-519659 stop                                                                                                                                     │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:32 UTC │ 02 Dec 25 20:35 UTC │
	│ start   │ -p multinode-519659 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:35 UTC │ 02 Dec 25 20:37 UTC │
	│ node    │ list -p multinode-519659                                                                                                                                  │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │                     │
	│ start   │ -p multinode-519659-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-519659-m02 │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │                     │
	│ start   │ -p multinode-519659-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-519659-m03 │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │ 02 Dec 25 20:37 UTC │
	│ node    │ add -p multinode-519659                                                                                                                                   │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │                     │
	│ delete  │ -p multinode-519659-m03                                                                                                                                   │ multinode-519659-m03 │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │ 02 Dec 25 20:37 UTC │
	│ delete  │ -p multinode-519659                                                                                                                                       │ multinode-519659     │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │ 02 Dec 25 20:37 UTC │
	│ start   │ -p test-preload-234694 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-234694  │ jenkins │ v1.37.0 │ 02 Dec 25 20:37 UTC │ 02 Dec 25 20:38 UTC │
	│ image   │ test-preload-234694 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-234694  │ jenkins │ v1.37.0 │ 02 Dec 25 20:38 UTC │ 02 Dec 25 20:38 UTC │
	│ stop    │ -p test-preload-234694                                                                                                                                    │ test-preload-234694  │ jenkins │ v1.37.0 │ 02 Dec 25 20:38 UTC │ 02 Dec 25 20:38 UTC │
	│ start   │ -p test-preload-234694 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-234694  │ jenkins │ v1.37.0 │ 02 Dec 25 20:38 UTC │ 02 Dec 25 20:39 UTC │
	│ image   │ test-preload-234694 image list                                                                                                                            │ test-preload-234694  │ jenkins │ v1.37.0 │ 02 Dec 25 20:39 UTC │ 02 Dec 25 20:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
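The TestPreload flow recorded in the table above (start with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, restart with --preload=true, list images) can be replayed outside the test harness. A minimal Go sketch follows, assuming the out/minikube-linux-amd64 binary from this run and a hypothetical throwaway profile name; it is not the test's own code.

// repro.go: replay the TestPreload command sequence from the table above.
// Hypothetical standalone sketch; the profile name is an assumption.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	p := "test-preload-repro" // throwaway profile, not the CI profile
	run("start", "-p", p, "--memory=3072", "--preload=false", "--driver=kvm2", "--container-runtime=crio")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--preload=true", "--wait=true", "--driver=kvm2", "--container-runtime=crio")
	run("-p", p, "image", "list")
	run("delete", "-p", p)
}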
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:38:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:38:57.972229  172629 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:38:57.972334  172629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:38:57.972340  172629 out.go:374] Setting ErrFile to fd 2...
	I1202 20:38:57.972344  172629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:38:57.972557  172629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:38:57.972983  172629 out.go:368] Setting JSON to false
	I1202 20:38:57.973827  172629 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8482,"bootTime":1764699456,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:38:57.973880  172629 start.go:143] virtualization: kvm guest
	I1202 20:38:57.976107  172629 out.go:179] * [test-preload-234694] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:38:57.977358  172629 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:38:57.977374  172629 notify.go:221] Checking for updates...
	I1202 20:38:57.980090  172629 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:38:57.981432  172629 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:38:57.982625  172629 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 20:38:57.983797  172629 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:38:57.985028  172629 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:38:57.986600  172629 config.go:182] Loaded profile config "test-preload-234694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:38:57.987152  172629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:38:58.022813  172629 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 20:38:58.023916  172629 start.go:309] selected driver: kvm2
	I1202 20:38:58.023933  172629 start.go:927] validating driver "kvm2" against &{Name:test-preload-234694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.34.2 ClusterName:test-preload-234694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:38:58.024034  172629 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:38:58.024938  172629 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:38:58.024985  172629 cni.go:84] Creating CNI manager for ""
	I1202 20:38:58.025037  172629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:38:58.025085  172629 start.go:353] cluster config:
	{Name:test-preload-234694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-234694 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:38:58.025174  172629 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:38:58.026610  172629 out.go:179] * Starting "test-preload-234694" primary control-plane node in "test-preload-234694" cluster
	I1202 20:38:58.027591  172629 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:38:58.027624  172629 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:38:58.027648  172629 cache.go:65] Caching tarball of preloaded images
	I1202 20:38:58.027785  172629 preload.go:238] Found /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:38:58.027801  172629 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:38:58.027893  172629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/config.json ...
	I1202 20:38:58.028091  172629 start.go:360] acquireMachinesLock for test-preload-234694: {Name:mk87259b3368832a6a6ed41448f2ab0149793b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 20:38:58.028139  172629 start.go:364] duration metric: took 27.642µs to acquireMachinesLock for "test-preload-234694"
	I1202 20:38:58.028159  172629 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:38:58.028168  172629 fix.go:54] fixHost starting: 
	I1202 20:38:58.030018  172629 fix.go:112] recreateIfNeeded on test-preload-234694: state=Stopped err=<nil>
	W1202 20:38:58.030042  172629 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:38:58.031879  172629 out.go:252] * Restarting existing kvm2 VM for "test-preload-234694" ...
	I1202 20:38:58.031918  172629 main.go:143] libmachine: starting domain...
	I1202 20:38:58.031932  172629 main.go:143] libmachine: ensuring networks are active...
	I1202 20:38:58.032954  172629 main.go:143] libmachine: Ensuring network default is active
	I1202 20:38:58.033460  172629 main.go:143] libmachine: Ensuring network mk-test-preload-234694 is active
	I1202 20:38:58.034006  172629 main.go:143] libmachine: getting domain XML...
	I1202 20:38:58.035759  172629 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-234694</name>
	  <uuid>e1afabd0-5380-46a5-bd71-ab4f3a8a020d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/test-preload-234694.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:cb:bd:a9'/>
	      <source network='mk-test-preload-234694'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:fa:f8:2e'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
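The domain XML above is what libmachine hands back to libvirt when it restarts the stopped VM. A rough equivalent is sketched below, driven through virsh from Go with os/exec rather than the libvirt API that libmachine actually uses; the XML file name is a placeholder.

// restart_domain.go: define and start a libvirt domain from an XML file,
// roughly what the "Restarting existing kvm2 VM" step above does.
// Sketch only: shells out to virsh instead of using the libvirt API.
package main

import (
	"log"
	"os"
	"os/exec"
)

func virsh(args ...string) {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("virsh %v: %v", args, err)
	}
}

func main() {
	// domain.xml is assumed to hold the <domain> definition printed above.
	virsh("define", "domain.xml")             // (re)register the domain with libvirt
	virsh("start", "test-preload-234694")     // boot the VM
	virsh("domifaddr", "test-preload-234694") // then poll for an IP, as the log does next
}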
	
	I1202 20:38:59.333157  172629 main.go:143] libmachine: waiting for domain to start...
	I1202 20:38:59.334984  172629 main.go:143] libmachine: domain is now running
	I1202 20:38:59.335013  172629 main.go:143] libmachine: waiting for IP...
	I1202 20:38:59.335929  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:38:59.336681  172629 main.go:143] libmachine: domain test-preload-234694 has current primary IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:38:59.336698  172629 main.go:143] libmachine: found domain IP: 192.168.39.179
	I1202 20:38:59.336706  172629 main.go:143] libmachine: reserving static IP address...
	I1202 20:38:59.337299  172629 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-234694", mac: "52:54:00:cb:bd:a9", ip: "192.168.39.179"} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:38:02 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:38:59.337332  172629 main.go:143] libmachine: skip adding static IP to network mk-test-preload-234694 - found existing host DHCP lease matching {name: "test-preload-234694", mac: "52:54:00:cb:bd:a9", ip: "192.168.39.179"}
	I1202 20:38:59.337342  172629 main.go:143] libmachine: reserved static IP address 192.168.39.179 for domain test-preload-234694
	I1202 20:38:59.337348  172629 main.go:143] libmachine: waiting for SSH...
	I1202 20:38:59.337355  172629 main.go:143] libmachine: Getting to WaitForSSH function...
	I1202 20:38:59.340303  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:38:59.340811  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:38:02 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:38:59.340843  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:38:59.341045  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:38:59.341391  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:38:59.341409  172629 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1202 20:39:02.414968  172629 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.179:22: connect: no route to host
	I1202 20:39:08.495001  172629 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.179:22: connect: no route to host
	I1202 20:39:11.609378  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
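The two "no route to host" errors above are expected while the guest is still booting: the WaitForSSH step simply retries "exit 0" until sshd answers. A minimal sketch of that readiness probe with golang.org/x/crypto/ssh is shown below, reusing the address and key path from the log; the retry policy is made up.

// wait_ssh.go: retry an "exit 0" over SSH until the guest is reachable,
// mirroring the WaitForSSH step in the log. Sketch under assumptions:
// address and key path are taken from the log, retry policy is invented.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
		Timeout:         5 * time.Second,
	}
	for i := 0; i < 30; i++ { // ~2.5 minutes worst case
		client, err := ssh.Dial("tcp", "192.168.39.179:22", cfg)
		if err != nil {
			log.Printf("not ready yet: %v", err) // e.g. "no route to host" while booting
			time.Sleep(5 * time.Second)
			continue
		}
		sess, err := client.NewSession()
		if err == nil {
			err = sess.Run("exit 0")
			sess.Close()
		}
		client.Close()
		if err == nil {
			log.Println("SSH is up")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("gave up waiting for SSH")
}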
	I1202 20:39:11.613528  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.614049  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.614080  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.614341  172629 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/config.json ...
	I1202 20:39:11.614562  172629 machine.go:94] provisionDockerMachine start ...
	I1202 20:39:11.617131  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.617717  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.617754  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.617984  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:39:11.618294  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:39:11.618315  172629 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:39:11.723389  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 20:39:11.723419  172629 buildroot.go:166] provisioning hostname "test-preload-234694"
	I1202 20:39:11.726261  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.726688  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.726713  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.726906  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:39:11.727172  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:39:11.727185  172629 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-234694 && echo "test-preload-234694" | sudo tee /etc/hostname
	I1202 20:39:11.843868  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-234694
	
	I1202 20:39:11.847026  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.847516  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.847545  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.847734  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:39:11.847933  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:39:11.847947  172629 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-234694' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-234694/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-234694' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:39:11.955854  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
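The shell script run just above makes the 127.0.1.1 mapping idempotent: do nothing if the hostname is already present, replace an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic in Go, as a local sketch (minikube runs the shell version over SSH):

// ensure_hosts.go: idempotently map 127.0.1.1 to the node hostname,
// mirroring the /etc/hosts edit performed on the guest above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const host = "test-preload-234694"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	if regexp.MustCompile(`(?m)^.*\s` + host + `$`).Match(data) {
		return // hostname already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+host))
	} else {
		data = append(data, []byte("127.0.1.1 "+host+"\n")...)
	}
	if err := os.WriteFile("/etc/hosts", data, 0o644); err != nil {
		log.Fatal(err)
	}
}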
	I1202 20:39:11.955882  172629 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:39:11.955934  172629 buildroot.go:174] setting up certificates
	I1202 20:39:11.955945  172629 provision.go:84] configureAuth start
	I1202 20:39:11.959049  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.959465  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.959501  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.962086  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.962585  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:11.962612  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:11.962780  172629 provision.go:143] copyHostCerts
	I1202 20:39:11.962827  172629 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:39:11.962837  172629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:39:11.962904  172629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:39:11.962993  172629 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:39:11.963002  172629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:39:11.963028  172629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:39:11.963089  172629 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:39:11.963096  172629 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:39:11.963118  172629 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:39:11.963167  172629 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.test-preload-234694 san=[127.0.0.1 192.168.39.179 localhost minikube test-preload-234694]
	I1202 20:39:12.039933  172629 provision.go:177] copyRemoteCerts
	I1202 20:39:12.040000  172629 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:39:12.042761  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.043175  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.043199  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.043330  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:12.125826  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:39:12.154435  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1202 20:39:12.182726  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:39:12.213514  172629 provision.go:87] duration metric: took 257.549839ms to configureAuth
	I1202 20:39:12.213549  172629 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:39:12.213745  172629 config.go:182] Loaded profile config "test-preload-234694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:39:12.216549  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.216959  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.216979  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.217166  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:39:12.217355  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:39:12.217369  172629 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:39:12.482244  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:39:12.482276  172629 machine.go:97] duration metric: took 867.69984ms to provisionDockerMachine
	I1202 20:39:12.482292  172629 start.go:293] postStartSetup for "test-preload-234694" (driver="kvm2")
	I1202 20:39:12.482306  172629 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:39:12.482386  172629 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:39:12.485276  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.485744  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.485772  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.485946  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:12.570532  172629 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:39:12.575389  172629 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:39:12.575415  172629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:39:12.575499  172629 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:39:12.575626  172629 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:39:12.575758  172629 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:39:12.587508  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:39:12.616799  172629 start.go:296] duration metric: took 134.486183ms for postStartSetup
	I1202 20:39:12.616842  172629 fix.go:56] duration metric: took 14.588673788s for fixHost
	I1202 20:39:12.619602  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.620010  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.620038  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.620228  172629 main.go:143] libmachine: Using SSH client type: native
	I1202 20:39:12.620448  172629 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.179 22 <nil> <nil>}
	I1202 20:39:12.620461  172629 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:39:12.722051  172629 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764707952.676407354
	
	I1202 20:39:12.722072  172629 fix.go:216] guest clock: 1764707952.676407354
	I1202 20:39:12.722093  172629 fix.go:229] Guest: 2025-12-02 20:39:12.676407354 +0000 UTC Remote: 2025-12-02 20:39:12.616846103 +0000 UTC m=+14.695040473 (delta=59.561251ms)
	I1202 20:39:12.722110  172629 fix.go:200] guest clock delta is within tolerance: 59.561251ms
	I1202 20:39:12.722114  172629 start.go:83] releasing machines lock for "test-preload-234694", held for 14.693963808s
	I1202 20:39:12.724957  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.725416  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.725449  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.726112  172629 ssh_runner.go:195] Run: cat /version.json
	I1202 20:39:12.726184  172629 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:39:12.729222  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.729555  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.729578  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.729723  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.729730  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:12.730180  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:12.730217  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:12.730391  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:12.805820  172629 ssh_runner.go:195] Run: systemctl --version
	I1202 20:39:12.840908  172629 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:39:12.988032  172629 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:39:12.994924  172629 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:39:12.994996  172629 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:39:13.015834  172629 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:39:13.015862  172629 start.go:496] detecting cgroup driver to use...
	I1202 20:39:13.015962  172629 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:39:13.036247  172629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:39:13.053908  172629 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:39:13.053989  172629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:39:13.071777  172629 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:39:13.088455  172629 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:39:13.236466  172629 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:39:13.461045  172629 docker.go:234] disabling docker service ...
	I1202 20:39:13.461124  172629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:39:13.477988  172629 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:39:13.495894  172629 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:39:13.658976  172629 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:39:13.806161  172629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:39:13.821575  172629 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:39:13.842825  172629 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:39:13.842906  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.855167  172629 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:39:13.855241  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.867639  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.879943  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.892521  172629 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:39:13.905500  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.917981  172629 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:39:13.938689  172629 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
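The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. Below is a sketch of the first two substitutions in Go, purely illustrative, since minikube shells out to sed on the guest rather than editing the file like this.

// patch_crio_conf.go: apply the pause_image / cgroup_manager rewrites that
// the sed commands in the log perform. Path and values are taken from the log.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}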
	I1202 20:39:13.950754  172629 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:39:13.960825  172629 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 20:39:13.960910  172629 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 20:39:13.982001  172629 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:39:13.993805  172629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:39:14.131833  172629 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:39:14.243594  172629 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:39:14.243692  172629 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:39:14.248956  172629 start.go:564] Will wait 60s for crictl version
	I1202 20:39:14.249041  172629 ssh_runner.go:195] Run: which crictl
	I1202 20:39:14.253115  172629 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:39:14.287927  172629 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:39:14.288032  172629 ssh_runner.go:195] Run: crio --version
	I1202 20:39:14.315728  172629 ssh_runner.go:195] Run: crio --version
	I1202 20:39:14.344958  172629 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1202 20:39:14.348807  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:14.349168  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:14.349193  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:14.349338  172629 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 20:39:14.353668  172629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:39:14.368939  172629 kubeadm.go:884] updating cluster {Name:test-preload-234694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-234694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:39:14.369128  172629 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:39:14.369182  172629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:39:14.403814  172629 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1202 20:39:14.403908  172629 ssh_runner.go:195] Run: which lz4
	I1202 20:39:14.408225  172629 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 20:39:14.413243  172629 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 20:39:14.413286  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1202 20:39:15.594086  172629 crio.go:462] duration metric: took 1.185911062s to copy over tarball
	I1202 20:39:15.594162  172629 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 20:39:17.032330  172629 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.438141977s)
	I1202 20:39:17.032353  172629 crio.go:469] duration metric: took 1.438239874s to extract the tarball
	I1202 20:39:17.032360  172629 ssh_runner.go:146] rm: /preloaded.tar.lz4
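Because the image check at 20:39:14 found no preloaded images in the CRI-O store, the runner copies the ~340 MB tarball to /preloaded.tar.lz4, unpacks it into /var with xattrs preserved, and then deletes it. A standalone sketch of that extract step with the same tar invocation and a duration metric, assuming tar and lz4 are installed on the host:

// extract_preload.go: extract a preloaded image tarball the way the log's
// "sudo tar --xattrs -I lz4 -C /var -xf /preloaded.tar.lz4" step does,
// and report how long it took.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	log.Printf("extracted preload in %s", time.Since(start)) // the log above shows ~1.4s
	_ = os.Remove("/preloaded.tar.lz4") // the runner deletes the tarball afterwards
}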
	I1202 20:39:17.067793  172629 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:39:17.104192  172629 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:39:17.104216  172629 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:39:17.104224  172629 kubeadm.go:935] updating node { 192.168.39.179 8443 v1.34.2 crio true true} ...
	I1202 20:39:17.104324  172629 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-234694 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-234694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:39:17.104387  172629 ssh_runner.go:195] Run: crio config
	I1202 20:39:17.150498  172629 cni.go:84] Creating CNI manager for ""
	I1202 20:39:17.150522  172629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:39:17.150538  172629 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:39:17.150559  172629 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.179 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-234694 NodeName:test-preload-234694 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:39:17.150699  172629 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-234694"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.179"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.179"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
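The generated config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch that decodes such a multi-document file and reports each document's kind, to confirm every piece is valid YAML; gopkg.in/yaml.v3 is an assumed dependency here, not what minikube itself uses.

// check_kubeadm_yaml.go: sanity-check the multi-document kubeadm config shown above.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("kubeadm.yaml") // assumed local copy of the dump above
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("document %d does not parse: %v", i, err)
		}
		fmt.Printf("document %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
	}
}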
	
	I1202 20:39:17.150772  172629 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:39:17.162431  172629 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:39:17.162498  172629 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:39:17.173591  172629 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1202 20:39:17.192884  172629 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:39:17.211939  172629 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1202 20:39:17.231307  172629 ssh_runner.go:195] Run: grep 192.168.39.179	control-plane.minikube.internal$ /etc/hosts
	I1202 20:39:17.235299  172629 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:39:17.249168  172629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:39:17.382911  172629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:39:17.416555  172629 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694 for IP: 192.168.39.179
	I1202 20:39:17.416579  172629 certs.go:195] generating shared ca certs ...
	I1202 20:39:17.416597  172629 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:39:17.416787  172629 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:39:17.416830  172629 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:39:17.416841  172629 certs.go:257] generating profile certs ...
	I1202 20:39:17.416921  172629 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.key
	I1202 20:39:17.416977  172629 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/apiserver.key.8aefd392
	I1202 20:39:17.417014  172629 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/proxy-client.key
	I1202 20:39:17.417121  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:39:17.417149  172629 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:39:17.417159  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:39:17.417182  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:39:17.417205  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:39:17.417227  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:39:17.417266  172629 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:39:17.417823  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:39:17.446858  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:39:17.479329  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:39:17.508004  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:39:17.536742  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1202 20:39:17.565282  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:39:17.597333  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:39:17.628750  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1202 20:39:17.660688  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:39:17.693371  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:39:17.726167  172629 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:39:17.754307  172629 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:39:17.777215  172629 ssh_runner.go:195] Run: openssl version
	I1202 20:39:17.784007  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:39:17.796505  172629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:39:17.801983  172629 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:39:17.802041  172629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:39:17.809145  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:39:17.821494  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:39:17.833859  172629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:39:17.838936  172629 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:39:17.838996  172629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:39:17.845817  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:39:17.857987  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:39:17.871224  172629 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:39:17.876288  172629 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:39:17.876358  172629 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:39:17.883514  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:39:17.896285  172629 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:39:17.901257  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:39:17.908242  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:39:17.915022  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:39:17.921923  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:39:17.928735  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:39:17.935394  172629 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:39:17.942062  172629 kubeadm.go:401] StartCluster: {Name:test-preload-234694 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-234694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:39:17.942181  172629 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:39:17.942238  172629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:39:17.975351  172629 cri.go:89] found id: ""
	I1202 20:39:17.975427  172629 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:39:17.987966  172629 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1202 20:39:17.987987  172629 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1202 20:39:17.988044  172629 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1202 20:39:17.999783  172629 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:39:18.000228  172629 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-234694" does not appear in /home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:39:18.000357  172629 kubeconfig.go:62] /home/jenkins/minikube-integration/21997-143119/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-234694" cluster setting kubeconfig missing "test-preload-234694" context setting]
	I1202 20:39:18.000719  172629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/kubeconfig: {Name:mk45f2610791f17b0d78039ad0468591c7331759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:39:18.001220  172629 kapi.go:59] client config for test-preload-234694: &rest.Config{Host:"https://192.168.39.179:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:39:18.001639  172629 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1202 20:39:18.001652  172629 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1202 20:39:18.001669  172629 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1202 20:39:18.001688  172629 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1202 20:39:18.001695  172629 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1202 20:39:18.002076  172629 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1202 20:39:18.013469  172629 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.179
	I1202 20:39:18.013521  172629 kubeadm.go:1161] stopping kube-system containers ...
	I1202 20:39:18.013538  172629 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1202 20:39:18.013612  172629 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:39:18.053085  172629 cri.go:89] found id: ""
	I1202 20:39:18.053177  172629 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1202 20:39:18.072254  172629 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:39:18.083641  172629 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:39:18.083691  172629 kubeadm.go:158] found existing configuration files:
	
	I1202 20:39:18.083753  172629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:39:18.094290  172629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:39:18.094375  172629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:39:18.105496  172629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:39:18.118213  172629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:39:18.118266  172629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:39:18.132891  172629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:39:18.145301  172629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:39:18.145361  172629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:39:18.158436  172629 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:39:18.170462  172629 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:39:18.170518  172629 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:39:18.183520  172629 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:39:18.196670  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:18.257921  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:19.428361  172629 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.170387825s)
	I1202 20:39:19.428439  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:19.676267  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:19.749934  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:19.823856  172629 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:39:19.823949  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:20.324080  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:20.824139  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:21.324843  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:21.825033  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:21.879351  172629 api_server.go:72] duration metric: took 2.055510632s to wait for apiserver process to appear ...
	I1202 20:39:21.879386  172629 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:39:21.879414  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:21.880040  172629 api_server.go:269] stopped: https://192.168.39.179:8443/healthz: Get "https://192.168.39.179:8443/healthz": dial tcp 192.168.39.179:8443: connect: connection refused
	I1202 20:39:22.379915  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:24.774927  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 20:39:24.774964  172629 api_server.go:103] status: https://192.168.39.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 20:39:24.774983  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:24.808557  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1202 20:39:24.808587  172629 api_server.go:103] status: https://192.168.39.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1202 20:39:24.879987  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:24.891404  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:39:24.891451  172629 api_server.go:103] status: https://192.168.39.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:39:25.380229  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:25.385028  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:39:25.385057  172629 api_server.go:103] status: https://192.168.39.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:39:25.879745  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:25.886807  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1202 20:39:25.886833  172629 api_server.go:103] status: https://192.168.39.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1202 20:39:26.379494  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:26.384072  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I1202 20:39:26.390960  172629 api_server.go:141] control plane version: v1.34.2
	I1202 20:39:26.390989  172629 api_server.go:131] duration metric: took 4.51159464s to wait for apiserver health ...
	I1202 20:39:26.390999  172629 cni.go:84] Creating CNI manager for ""
	I1202 20:39:26.391005  172629 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:39:26.392963  172629 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 20:39:26.394204  172629 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 20:39:26.406472  172629 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 20:39:26.431013  172629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:39:26.437159  172629 system_pods.go:59] 7 kube-system pods found
	I1202 20:39:26.437217  172629 system_pods.go:61] "coredns-66bc5c9577-zhc9c" [614a15b2-aa56-4a03-bf8e-08149495e90e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:39:26.437232  172629 system_pods.go:61] "etcd-test-preload-234694" [97f237a5-a046-42a1-958a-c459d05ff478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:39:26.437244  172629 system_pods.go:61] "kube-apiserver-test-preload-234694" [a8c307e0-d08e-4269-87c8-cfdf2cf847f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:39:26.437253  172629 system_pods.go:61] "kube-controller-manager-test-preload-234694" [247f8485-ddb2-42f9-bc6d-509efe04c6f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:39:26.437262  172629 system_pods.go:61] "kube-proxy-6kqv5" [db7a9e54-1339-431a-8aa5-1f84fab00a57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:39:26.437272  172629 system_pods.go:61] "kube-scheduler-test-preload-234694" [cc83b0bf-aab5-4560-a6e9-93fe6446b092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:39:26.437284  172629 system_pods.go:61] "storage-provisioner" [6b6a7bea-1aa7-49a9-8858-3ca632d9e66f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:39:26.437295  172629 system_pods.go:74] duration metric: took 6.254901ms to wait for pod list to return data ...
	I1202 20:39:26.437308  172629 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:39:26.441038  172629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 20:39:26.441063  172629 node_conditions.go:123] node cpu capacity is 2
	I1202 20:39:26.441076  172629 node_conditions.go:105] duration metric: took 3.76188ms to run NodePressure ...
	I1202 20:39:26.441126  172629 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1202 20:39:26.696804  172629 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1202 20:39:26.701463  172629 kubeadm.go:744] kubelet initialised
	I1202 20:39:26.701484  172629 kubeadm.go:745] duration metric: took 4.653185ms waiting for restarted kubelet to initialise ...
	I1202 20:39:26.701502  172629 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:39:26.718972  172629 ops.go:34] apiserver oom_adj: -16
	I1202 20:39:26.718997  172629 kubeadm.go:602] duration metric: took 8.731002977s to restartPrimaryControlPlane
	I1202 20:39:26.719011  172629 kubeadm.go:403] duration metric: took 8.776957803s to StartCluster
	I1202 20:39:26.719032  172629 settings.go:142] acquiring lock: {Name:mka4c337368f188b532e41dc38505f24fc351556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:39:26.719128  172629 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:39:26.719929  172629 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/kubeconfig: {Name:mk45f2610791f17b0d78039ad0468591c7331759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:39:26.720226  172629 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.179 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:39:26.720329  172629 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:39:26.720420  172629 config.go:182] Loaded profile config "test-preload-234694": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:39:26.720458  172629 addons.go:70] Setting storage-provisioner=true in profile "test-preload-234694"
	I1202 20:39:26.720480  172629 addons.go:70] Setting default-storageclass=true in profile "test-preload-234694"
	I1202 20:39:26.720500  172629 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-234694"
	I1202 20:39:26.720485  172629 addons.go:239] Setting addon storage-provisioner=true in "test-preload-234694"
	W1202 20:39:26.720590  172629 addons.go:248] addon storage-provisioner should already be in state true
	I1202 20:39:26.720620  172629 host.go:66] Checking if "test-preload-234694" exists ...
	I1202 20:39:26.721943  172629 out.go:179] * Verifying Kubernetes components...
	I1202 20:39:26.723188  172629 kapi.go:59] client config for test-preload-234694: &rest.Config{Host:"https://192.168.39.179:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:39:26.723268  172629 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:39:26.723309  172629 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:39:26.723507  172629 addons.go:239] Setting addon default-storageclass=true in "test-preload-234694"
	W1202 20:39:26.723524  172629 addons.go:248] addon default-storageclass should already be in state true
	I1202 20:39:26.723544  172629 host.go:66] Checking if "test-preload-234694" exists ...
	I1202 20:39:26.724465  172629 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:39:26.724489  172629 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:39:26.725436  172629 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:39:26.725453  172629 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:39:26.728155  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:26.728192  172629 main.go:143] libmachine: domain test-preload-234694 has defined MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:26.728692  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:26.728725  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:26.728734  172629 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cb:bd:a9", ip: ""} in network mk-test-preload-234694: {Iface:virbr1 ExpiryTime:2025-12-02 21:39:09 +0000 UTC Type:0 Mac:52:54:00:cb:bd:a9 Iaid: IPaddr:192.168.39.179 Prefix:24 Hostname:test-preload-234694 Clientid:01:52:54:00:cb:bd:a9}
	I1202 20:39:26.728774  172629 main.go:143] libmachine: domain test-preload-234694 has defined IP address 192.168.39.179 and MAC address 52:54:00:cb:bd:a9 in network mk-test-preload-234694
	I1202 20:39:26.728866  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:26.729100  172629 sshutil.go:53] new ssh client: &{IP:192.168.39.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/test-preload-234694/id_rsa Username:docker}
	I1202 20:39:27.038407  172629 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:39:27.084271  172629 node_ready.go:35] waiting up to 6m0s for node "test-preload-234694" to be "Ready" ...
	I1202 20:39:27.090270  172629 node_ready.go:49] node "test-preload-234694" is "Ready"
	I1202 20:39:27.090312  172629 node_ready.go:38] duration metric: took 5.993287ms for node "test-preload-234694" to be "Ready" ...
	I1202 20:39:27.090334  172629 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:39:27.090406  172629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:39:27.149692  172629 api_server.go:72] duration metric: took 429.420613ms to wait for apiserver process to appear ...
	I1202 20:39:27.149735  172629 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:39:27.149771  172629 api_server.go:253] Checking apiserver healthz at https://192.168.39.179:8443/healthz ...
	I1202 20:39:27.161204  172629 api_server.go:279] https://192.168.39.179:8443/healthz returned 200:
	ok
	I1202 20:39:27.164733  172629 api_server.go:141] control plane version: v1.34.2
	I1202 20:39:27.164768  172629 api_server.go:131] duration metric: took 15.022737ms to wait for apiserver health ...
	I1202 20:39:27.164781  172629 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:39:27.168800  172629 system_pods.go:59] 7 kube-system pods found
	I1202 20:39:27.168827  172629 system_pods.go:61] "coredns-66bc5c9577-zhc9c" [614a15b2-aa56-4a03-bf8e-08149495e90e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:39:27.168834  172629 system_pods.go:61] "etcd-test-preload-234694" [97f237a5-a046-42a1-958a-c459d05ff478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:39:27.168842  172629 system_pods.go:61] "kube-apiserver-test-preload-234694" [a8c307e0-d08e-4269-87c8-cfdf2cf847f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:39:27.168847  172629 system_pods.go:61] "kube-controller-manager-test-preload-234694" [247f8485-ddb2-42f9-bc6d-509efe04c6f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:39:27.168852  172629 system_pods.go:61] "kube-proxy-6kqv5" [db7a9e54-1339-431a-8aa5-1f84fab00a57] Running
	I1202 20:39:27.168859  172629 system_pods.go:61] "kube-scheduler-test-preload-234694" [cc83b0bf-aab5-4560-a6e9-93fe6446b092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:39:27.168863  172629 system_pods.go:61] "storage-provisioner" [6b6a7bea-1aa7-49a9-8858-3ca632d9e66f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:39:27.168870  172629 system_pods.go:74] duration metric: took 4.083145ms to wait for pod list to return data ...
	I1202 20:39:27.168877  172629 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:39:27.171356  172629 default_sa.go:45] found service account: "default"
	I1202 20:39:27.171375  172629 default_sa.go:55] duration metric: took 2.492968ms for default service account to be created ...
	I1202 20:39:27.171383  172629 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:39:27.181782  172629 system_pods.go:86] 7 kube-system pods found
	I1202 20:39:27.181822  172629 system_pods.go:89] "coredns-66bc5c9577-zhc9c" [614a15b2-aa56-4a03-bf8e-08149495e90e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:39:27.181836  172629 system_pods.go:89] "etcd-test-preload-234694" [97f237a5-a046-42a1-958a-c459d05ff478] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:39:27.181848  172629 system_pods.go:89] "kube-apiserver-test-preload-234694" [a8c307e0-d08e-4269-87c8-cfdf2cf847f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:39:27.181856  172629 system_pods.go:89] "kube-controller-manager-test-preload-234694" [247f8485-ddb2-42f9-bc6d-509efe04c6f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:39:27.181862  172629 system_pods.go:89] "kube-proxy-6kqv5" [db7a9e54-1339-431a-8aa5-1f84fab00a57] Running
	I1202 20:39:27.181884  172629 system_pods.go:89] "kube-scheduler-test-preload-234694" [cc83b0bf-aab5-4560-a6e9-93fe6446b092] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:39:27.181928  172629 system_pods.go:89] "storage-provisioner" [6b6a7bea-1aa7-49a9-8858-3ca632d9e66f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:39:27.181949  172629 system_pods.go:126] duration metric: took 10.561735ms to wait for k8s-apps to be running ...
	I1202 20:39:27.181958  172629 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:39:27.182015  172629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:39:27.230980  172629 system_svc.go:56] duration metric: took 49.005027ms WaitForService to wait for kubelet
	I1202 20:39:27.231025  172629 kubeadm.go:587] duration metric: took 510.759688ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:39:27.231055  172629 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:39:27.235918  172629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:39:27.240622  172629 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:39:27.244373  172629 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 20:39:27.244407  172629 node_conditions.go:123] node cpu capacity is 2
	I1202 20:39:27.244425  172629 node_conditions.go:105] duration metric: took 13.361967ms to run NodePressure ...
	I1202 20:39:27.244442  172629 start.go:242] waiting for startup goroutines ...
	I1202 20:39:27.980086  172629 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1202 20:39:27.981624  172629 addons.go:530] duration metric: took 1.261292163s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1202 20:39:27.981698  172629 start.go:247] waiting for cluster config update ...
	I1202 20:39:27.981717  172629 start.go:256] writing updated cluster config ...
	I1202 20:39:27.982134  172629 ssh_runner.go:195] Run: rm -f paused
	I1202 20:39:27.993007  172629 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:39:27.993599  172629 kapi.go:59] client config for test-preload-234694: &rest.Config{Host:"https://192.168.39.179:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.crt", KeyFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/profiles/test-preload-234694/client.key", CAFile:"/home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1202 20:39:28.004430  172629 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zhc9c" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:39:30.011180  172629 pod_ready.go:104] pod "coredns-66bc5c9577-zhc9c" is not "Ready", error: <nil>
	W1202 20:39:32.012544  172629 pod_ready.go:104] pod "coredns-66bc5c9577-zhc9c" is not "Ready", error: <nil>
	W1202 20:39:34.013603  172629 pod_ready.go:104] pod "coredns-66bc5c9577-zhc9c" is not "Ready", error: <nil>
	I1202 20:39:34.510862  172629 pod_ready.go:94] pod "coredns-66bc5c9577-zhc9c" is "Ready"
	I1202 20:39:34.510916  172629 pod_ready.go:86] duration metric: took 6.506439401s for pod "coredns-66bc5c9577-zhc9c" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:34.513650  172629 pod_ready.go:83] waiting for pod "etcd-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:34.518575  172629 pod_ready.go:94] pod "etcd-test-preload-234694" is "Ready"
	I1202 20:39:34.518605  172629 pod_ready.go:86] duration metric: took 4.920966ms for pod "etcd-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:34.520616  172629 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:39:36.526580  172629 pod_ready.go:104] pod "kube-apiserver-test-preload-234694" is not "Ready", error: <nil>
	I1202 20:39:38.026943  172629 pod_ready.go:94] pod "kube-apiserver-test-preload-234694" is "Ready"
	I1202 20:39:38.026970  172629 pod_ready.go:86] duration metric: took 3.506333196s for pod "kube-apiserver-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.029968  172629 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.035770  172629 pod_ready.go:94] pod "kube-controller-manager-test-preload-234694" is "Ready"
	I1202 20:39:38.035794  172629 pod_ready.go:86] duration metric: took 5.806485ms for pod "kube-controller-manager-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.038226  172629 pod_ready.go:83] waiting for pod "kube-proxy-6kqv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.110079  172629 pod_ready.go:94] pod "kube-proxy-6kqv5" is "Ready"
	I1202 20:39:38.110108  172629 pod_ready.go:86] duration metric: took 71.859664ms for pod "kube-proxy-6kqv5" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.308556  172629 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.709464  172629 pod_ready.go:94] pod "kube-scheduler-test-preload-234694" is "Ready"
	I1202 20:39:38.709508  172629 pod_ready.go:86] duration metric: took 400.916488ms for pod "kube-scheduler-test-preload-234694" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:39:38.709529  172629 pod_ready.go:40] duration metric: took 10.71646767s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:39:38.754241  172629 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1202 20:39:38.756228  172629 out.go:179] * Done! kubectl is now configured to use "test-preload-234694" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.517217565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764707979517191796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8ca9098-75cd-4ac0-84bc-16e0b74ae988 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.518481980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e20a58d2-3e6a-47e4-8caf-4ff8970dae9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.518686827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e20a58d2-3e6a-47e4-8caf-4ff8970dae9e name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.519285860Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ae98a540d97cf8450d31dc1d5a4eb65b0c6ab3a9d5b5d05abffe22f984a811b,PodSandboxId:4c4f80dd0108705b911d43ff1a5baa12cf107bb5e1f4da7f4af3686a89e04bdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764707972909823864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zhc9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614a15b2-aa56-4a03-bf8e-08149495e90e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe,PodSandboxId:50f955d97e7d6927e21b19a9e3fb2ee7777e41077f4d516592edb7a503646e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1764707966942128228,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6a7bea-1aa7-49a9-8858-3ca632d9e66f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14416cc63f7abfc4565f9b9c9838e0e48e589ed8ddd0af5ea9dbcec5e057782a,PodSandboxId:d5d645c0ce64e21f21e358b4bfb482f51109b2b6008986d79db1e37a98c1dcd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764707966657834521,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kqv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7a9e54-1339-431a-8aa5-1f84fab00a57,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be61622e7fd2bc98b8b7894acb407a1442d8f2f260ddb20445f60cf3f3d7eb,PodSandboxId:ab59dcddfd6efbf812fc2ff1174426b6d3de6e2611227d6ab01173c2b71d900f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764707961572183215,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6bef1e37c1d8e05608c57a0c7b0472d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99207efd185003aee92dae6220c0aa926af7bc214e5e01ccef62ae139d7955a6,PodSandboxId:76e252d3bf96fa76b4478bdea385cad5143ead44cbf4c4d5f15c201a42d6c1a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764707961540723966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3544bee3f849c21542ebe3f8347f2b52,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195e2b9a7db27c1a3ad8333573d0be1c4570f0a8fb7fdebb625cba8ab731cc53,PodSandboxId:746c85e89fa9c4dccf81db4dc04083a03ea078376ebb60dea97f131fa5fafea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764707961564142813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0b9e18c6626068d8fed03c4baae4b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa2e190ee5a4c6f05f9e33e181682ddb39570fb6251f6b03bf0676d4e3ea08,PodSandboxId:8c6a8d3350a1a74cbb9559d48bd81f1bea8ed38a2ab318f95911ae6996abc695,Metadata:&ContainerMetadata{Name:etcd,At
tempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764707961465009158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a53997a71dda9fe5b97f181c050bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e20a58d2-3e6a-47e4-8caf-4ff8970dae9e name=/runtime.v1.RuntimeService
/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.554838793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47c7840d-dbec-4712-897c-9b1612c2da1f name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.554960856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47c7840d-dbec-4712-897c-9b1612c2da1f name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.556359374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f445860-456c-4c25-a844-91b5b6b618af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.556956911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764707979556932338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f445860-456c-4c25-a844-91b5b6b618af name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.558095386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6baea40-1a16-4b91-8017-704de5196682 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.558236950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6baea40-1a16-4b91-8017-704de5196682 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.558417041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ae98a540d97cf8450d31dc1d5a4eb65b0c6ab3a9d5b5d05abffe22f984a811b,PodSandboxId:4c4f80dd0108705b911d43ff1a5baa12cf107bb5e1f4da7f4af3686a89e04bdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764707972909823864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zhc9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614a15b2-aa56-4a03-bf8e-08149495e90e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe,PodSandboxId:50f955d97e7d6927e21b19a9e3fb2ee7777e41077f4d516592edb7a503646e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1764707966942128228,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6a7bea-1aa7-49a9-8858-3ca632d9e66f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14416cc63f7abfc4565f9b9c9838e0e48e589ed8ddd0af5ea9dbcec5e057782a,PodSandboxId:d5d645c0ce64e21f21e358b4bfb482f51109b2b6008986d79db1e37a98c1dcd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764707966657834521,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kqv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7a9e54-1339-431a-8aa5-1f84fab00a57,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be61622e7fd2bc98b8b7894acb407a1442d8f2f260ddb20445f60cf3f3d7eb,PodSandboxId:ab59dcddfd6efbf812fc2ff1174426b6d3de6e2611227d6ab01173c2b71d900f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764707961572183215,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6bef1e37c1d8e05608c57a0c7b0472d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99207efd185003aee92dae6220c0aa926af7bc214e5e01ccef62ae139d7955a6,PodSandboxId:76e252d3bf96fa76b4478bdea385cad5143ead44cbf4c4d5f15c201a42d6c1a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764707961540723966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3544bee3f849c21542ebe3f8347f2b52,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195e2b9a7db27c1a3ad8333573d0be1c4570f0a8fb7fdebb625cba8ab731cc53,PodSandboxId:746c85e89fa9c4dccf81db4dc04083a03ea078376ebb60dea97f131fa5fafea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764707961564142813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0b9e18c6626068d8fed03c4baae4b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa2e190ee5a4c6f05f9e33e181682ddb39570fb6251f6b03bf0676d4e3ea08,PodSandboxId:8c6a8d3350a1a74cbb9559d48bd81f1bea8ed38a2ab318f95911ae6996abc695,Metadata:&ContainerMetadata{Name:etcd,At
tempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764707961465009158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a53997a71dda9fe5b97f181c050bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6baea40-1a16-4b91-8017-704de5196682 name=/runtime.v1.RuntimeService
/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.593667254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b8ebd66-e72e-4011-b781-9a373ab39f76 name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.593792967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b8ebd66-e72e-4011-b781-9a373ab39f76 name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.595248251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5dd6963c-0354-4393-8dde-a1060a39b81b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.595684448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764707979595660386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5dd6963c-0354-4393-8dde-a1060a39b81b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.596491864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b69b9ce-afb6-400f-952b-d16e4e899df0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.596551614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b69b9ce-afb6-400f-952b-d16e4e899df0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.596764610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ae98a540d97cf8450d31dc1d5a4eb65b0c6ab3a9d5b5d05abffe22f984a811b,PodSandboxId:4c4f80dd0108705b911d43ff1a5baa12cf107bb5e1f4da7f4af3686a89e04bdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764707972909823864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zhc9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614a15b2-aa56-4a03-bf8e-08149495e90e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe,PodSandboxId:50f955d97e7d6927e21b19a9e3fb2ee7777e41077f4d516592edb7a503646e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1764707966942128228,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6a7bea-1aa7-49a9-8858-3ca632d9e66f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14416cc63f7abfc4565f9b9c9838e0e48e589ed8ddd0af5ea9dbcec5e057782a,PodSandboxId:d5d645c0ce64e21f21e358b4bfb482f51109b2b6008986d79db1e37a98c1dcd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764707966657834521,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kqv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7a9e54-1339-431a-8aa5-1f84fab00a57,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be61622e7fd2bc98b8b7894acb407a1442d8f2f260ddb20445f60cf3f3d7eb,PodSandboxId:ab59dcddfd6efbf812fc2ff1174426b6d3de6e2611227d6ab01173c2b71d900f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764707961572183215,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6bef1e37c1d8e05608c57a0c7b0472d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99207efd185003aee92dae6220c0aa926af7bc214e5e01ccef62ae139d7955a6,PodSandboxId:76e252d3bf96fa76b4478bdea385cad5143ead44cbf4c4d5f15c201a42d6c1a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764707961540723966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3544bee3f849c21542ebe3f8347f2b52,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195e2b9a7db27c1a3ad8333573d0be1c4570f0a8fb7fdebb625cba8ab731cc53,PodSandboxId:746c85e89fa9c4dccf81db4dc04083a03ea078376ebb60dea97f131fa5fafea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764707961564142813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0b9e18c6626068d8fed03c4baae4b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa2e190ee5a4c6f05f9e33e181682ddb39570fb6251f6b03bf0676d4e3ea08,PodSandboxId:8c6a8d3350a1a74cbb9559d48bd81f1bea8ed38a2ab318f95911ae6996abc695,Metadata:&ContainerMetadata{Name:etcd,At
tempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764707961465009158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a53997a71dda9fe5b97f181c050bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b69b9ce-afb6-400f-952b-d16e4e899df0 name=/runtime.v1.RuntimeService
/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.625512475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2155eeaf-d376-403e-9e20-ae6b8a1846b7 name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.625635012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2155eeaf-d376-403e-9e20-ae6b8a1846b7 name=/runtime.v1.RuntimeService/Version
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.626888030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3169f62c-94bf-4b4c-abd8-8c11e058ec9c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.627547899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764707979627519238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3169f62c-94bf-4b4c-abd8-8c11e058ec9c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.628542562Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dbcc17f-e1ae-4f3d-9778-07aee385f14d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.628644790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dbcc17f-e1ae-4f3d-9778-07aee385f14d name=/runtime.v1.RuntimeService/ListContainers
	Dec 02 20:39:39 test-preload-234694 crio[836]: time="2025-12-02 20:39:39.629525046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1ae98a540d97cf8450d31dc1d5a4eb65b0c6ab3a9d5b5d05abffe22f984a811b,PodSandboxId:4c4f80dd0108705b911d43ff1a5baa12cf107bb5e1f4da7f4af3686a89e04bdd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764707972909823864,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-zhc9c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614a15b2-aa56-4a03-bf8e-08149495e90e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe,PodSandboxId:50f955d97e7d6927e21b19a9e3fb2ee7777e41077f4d516592edb7a503646e84,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1764707966942128228,Label
s:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b6a7bea-1aa7-49a9-8858-3ca632d9e66f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14416cc63f7abfc4565f9b9c9838e0e48e589ed8ddd0af5ea9dbcec5e057782a,PodSandboxId:d5d645c0ce64e21f21e358b4bfb482f51109b2b6008986d79db1e37a98c1dcd8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1764707966657834521,Labels:map[string]string{i
o.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6kqv5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db7a9e54-1339-431a-8aa5-1f84fab00a57,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59be61622e7fd2bc98b8b7894acb407a1442d8f2f260ddb20445f60cf3f3d7eb,PodSandboxId:ab59dcddfd6efbf812fc2ff1174426b6d3de6e2611227d6ab01173c2b71d900f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1764707961572183215,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6bef1e37c1d8e05608c57a0c7b0472d,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99207efd185003aee92dae6220c0aa926af7bc214e5e01ccef62ae139d7955a6,PodSandboxId:76e252d3bf96fa76b4478bdea385cad5143ead44cbf4c4d5f15c201a42d6c1a2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6
529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1764707961540723966,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3544bee3f849c21542ebe3f8347f2b52,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195e2b9a7db27c1a3ad8333573d0be1c4570f0a8fb7fdebb625cba8ab731cc53,PodSandboxId:746c85e89fa9c4dccf81db4dc04083a03ea078376ebb60dea97f131fa5fafea0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1764707961564142813,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea0b9e18c6626068d8fed03c4baae4b6,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa2e190ee5a4c6f05f9e33e181682ddb39570fb6251f6b03bf0676d4e3ea08,PodSandboxId:8c6a8d3350a1a74cbb9559d48bd81f1bea8ed38a2ab318f95911ae6996abc695,Metadata:&ContainerMetadata{Name:etcd,At
tempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1764707961465009158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-234694,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7a53997a71dda9fe5b97f181c050bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dbcc17f-e1ae-4f3d-9778-07aee385f14d name=/runtime.v1.RuntimeService
/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	1ae98a540d97c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 seconds ago       Running             coredns                   1                   4c4f80dd01087       coredns-66bc5c9577-zhc9c                      kube-system
	b91bd11e4d3c6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       2                   50f955d97e7d6       storage-provisioner                           kube-system
	14416cc63f7ab       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   13 seconds ago      Running             kube-proxy                1                   d5d645c0ce64e       kube-proxy-6kqv5                              kube-system
	59be61622e7fd       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   18 seconds ago      Running             kube-controller-manager   1                   ab59dcddfd6ef       kube-controller-manager-test-preload-234694   kube-system
	195e2b9a7db27       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   18 seconds ago      Running             kube-apiserver            1                   746c85e89fa9c       kube-apiserver-test-preload-234694            kube-system
	99207efd18500       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   18 seconds ago      Running             kube-scheduler            1                   76e252d3bf96f       kube-scheduler-test-preload-234694            kube-system
	93aa2e190ee5a       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      1                   8c6a8d3350a1a       etcd-test-preload-234694                      kube-system
	
	
	==> coredns [1ae98a540d97cf8450d31dc1d5a4eb65b0c6ab3a9d5b5d05abffe22f984a811b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55045 - 5108 "HINFO IN 3951347955126807882.7177957977010183602. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025574004s
	
	
	==> describe nodes <==
	Name:               test-preload-234694
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-234694
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92
	                    minikube.k8s.io/name=test-preload-234694
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_02T20_38_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 02 Dec 2025 20:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-234694
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 02 Dec 2025 20:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 02 Dec 2025 20:39:26 +0000   Tue, 02 Dec 2025 20:38:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 02 Dec 2025 20:39:26 +0000   Tue, 02 Dec 2025 20:38:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 02 Dec 2025 20:39:26 +0000   Tue, 02 Dec 2025 20:38:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 02 Dec 2025 20:39:26 +0000   Tue, 02 Dec 2025 20:39:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.179
	  Hostname:    test-preload-234694
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1afabd0538046a5bd71ab4f3a8a020d
	  System UUID:                e1afabd0-5380-46a5-bd71-ab4f3a8a020d
	  Boot ID:                    4ee46c87-9ebe-482d-9759-2bc4963e6b86
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zhc9c                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     62s
	  kube-system                 etcd-test-preload-234694                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         68s
	  kube-system                 kube-apiserver-test-preload-234694             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-test-preload-234694    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-6kqv5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-test-preload-234694             100m (5%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 61s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node test-preload-234694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node test-preload-234694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node test-preload-234694 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    68s                kubelet          Node test-preload-234694 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  68s                kubelet          Node test-preload-234694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     68s                kubelet          Node test-preload-234694 status is now: NodeHasSufficientPID
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Normal   NodeReady                67s                kubelet          Node test-preload-234694 status is now: NodeReady
	  Normal   RegisteredNode           64s                node-controller  Node test-preload-234694 event: Registered Node test-preload-234694 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-234694 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-234694 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-234694 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-234694 has been rebooted, boot id: 4ee46c87-9ebe-482d-9759-2bc4963e6b86
	  Normal   RegisteredNode           11s                node-controller  Node test-preload-234694 event: Registered Node test-preload-234694 in Controller
	
	
	==> dmesg <==
	[Dec 2 20:39] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001485] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001072] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.976634] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.101834] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.493725] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 172 callbacks suppressed
	
	
	==> etcd [93aa2e190ee5a4c6f05f9e33e181682ddb39570fb6251f6b03bf0676d4e3ea08] <==
	{"level":"warn","ts":"2025-12-02T20:39:23.752401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.774515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.787171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.801217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.812403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.849247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.851946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.864319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.879166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.889578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.899449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.918482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.923940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.936369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.947439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.957101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.970128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.978175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:23.993202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.003105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.012116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.027321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.035648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.049587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-02T20:39:24.138280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:39:39 up 0 min,  0 users,  load average: 0.72, 0.19, 0.06
	Linux test-preload-234694 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [195e2b9a7db27c1a3ad8333573d0be1c4570f0a8fb7fdebb625cba8ab731cc53] <==
	I1202 20:39:24.801117       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1202 20:39:24.801217       1 policy_source.go:240] refreshing policies
	I1202 20:39:24.813898       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1202 20:39:24.817468       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1202 20:39:24.842283       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1202 20:39:24.842443       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1202 20:39:24.842550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1202 20:39:24.845501       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1202 20:39:24.846245       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1202 20:39:24.848139       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1202 20:39:24.849365       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1202 20:39:24.858450       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1202 20:39:24.859786       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1202 20:39:24.871810       1 cache.go:39] Caches are synced for autoregister controller
	I1202 20:39:24.877678       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1202 20:39:24.881825       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1202 20:39:24.912173       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1202 20:39:25.647176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1202 20:39:26.495427       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1202 20:39:26.535338       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1202 20:39:26.578387       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1202 20:39:26.593770       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1202 20:39:28.150648       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1202 20:39:28.400794       1 controller.go:667] quota admission added evaluator for: endpoints
	I1202 20:39:28.501272       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [59be61622e7fd2bc98b8b7894acb407a1442d8f2f260ddb20445f60cf3f3d7eb] <==
	I1202 20:39:28.155419       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1202 20:39:28.155428       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1202 20:39:28.155434       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1202 20:39:28.155489       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1202 20:39:28.157644       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1202 20:39:28.162426       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1202 20:39:28.175747       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1202 20:39:28.176836       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:39:28.180137       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1202 20:39:28.180200       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1202 20:39:28.180209       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1202 20:39:28.180463       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1202 20:39:28.181776       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1202 20:39:28.181882       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1202 20:39:28.184758       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1202 20:39:28.186631       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1202 20:39:28.190333       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1202 20:39:28.194699       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1202 20:39:28.197024       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1202 20:39:28.197133       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1202 20:39:28.197143       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1202 20:39:28.198366       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1202 20:39:28.198496       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-234694"
	I1202 20:39:28.198564       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1202 20:39:28.197154       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	
	
	==> kube-proxy [14416cc63f7abfc4565f9b9c9838e0e48e589ed8ddd0af5ea9dbcec5e057782a] <==
	I1202 20:39:27.199340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1202 20:39:27.311285       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1202 20:39:27.311334       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.179"]
	E1202 20:39:27.311867       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1202 20:39:27.399956       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1202 20:39:27.400155       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1202 20:39:27.400253       1 server_linux.go:132] "Using iptables Proxier"
	I1202 20:39:27.424919       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1202 20:39:27.425242       1 server.go:527] "Version info" version="v1.34.2"
	I1202 20:39:27.425261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:39:27.438737       1 config.go:200] "Starting service config controller"
	I1202 20:39:27.438767       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1202 20:39:27.438786       1 config.go:106] "Starting endpoint slice config controller"
	I1202 20:39:27.438789       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1202 20:39:27.438812       1 config.go:403] "Starting serviceCIDR config controller"
	I1202 20:39:27.438816       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1202 20:39:27.441917       1 config.go:309] "Starting node config controller"
	I1202 20:39:27.441948       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1202 20:39:27.441955       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1202 20:39:27.539795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1202 20:39:27.539843       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1202 20:39:27.539898       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [99207efd185003aee92dae6220c0aa926af7bc214e5e01ccef62ae139d7955a6] <==
	I1202 20:39:24.752974       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1202 20:39:24.772114       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:39:24.775603       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1202 20:39:24.772592       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1202 20:39:24.772608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1202 20:39:24.785319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1202 20:39:24.785411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1202 20:39:24.785470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1202 20:39:24.788388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1202 20:39:24.788489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1202 20:39:24.788565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1202 20:39:24.788631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1202 20:39:24.788727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1202 20:39:24.788806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1202 20:39:24.788872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1202 20:39:24.788943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1202 20:39:24.789003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1202 20:39:24.789783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1202 20:39:24.789913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1202 20:39:24.790189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1202 20:39:24.790749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1202 20:39:24.791966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1202 20:39:24.808152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1202 20:39:24.809504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1202 20:39:25.676606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: I1202 20:39:24.904298    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6b6a7bea-1aa7-49a9-8858-3ca632d9e66f-tmp\") pod \"storage-provisioner\" (UID: \"6b6a7bea-1aa7-49a9-8858-3ca632d9e66f\") " pod="kube-system/storage-provisioner"
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: I1202 20:39:24.904313    1167 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db7a9e54-1339-431a-8aa5-1f84fab00a57-lib-modules\") pod \"kube-proxy-6kqv5\" (UID: \"db7a9e54-1339-431a-8aa5-1f84fab00a57\") " pod="kube-system/kube-proxy-6kqv5"
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: E1202 20:39:24.904669    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: E1202 20:39:24.904737    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume podName:614a15b2-aa56-4a03-bf8e-08149495e90e nodeName:}" failed. No retries permitted until 2025-12-02 20:39:25.404717976 +0000 UTC m=+5.780718724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume") pod "coredns-66bc5c9577-zhc9c" (UID: "614a15b2-aa56-4a03-bf8e-08149495e90e") : object "kube-system"/"coredns" not registered
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: E1202 20:39:24.916966    1167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-234694\" already exists" pod="kube-system/kube-scheduler-test-preload-234694"
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: I1202 20:39:24.917005    1167 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-234694"
	Dec 02 20:39:24 test-preload-234694 kubelet[1167]: E1202 20:39:24.943341    1167 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-234694\" already exists" pod="kube-system/kube-apiserver-test-preload-234694"
	Dec 02 20:39:25 test-preload-234694 kubelet[1167]: E1202 20:39:25.408932    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 20:39:25 test-preload-234694 kubelet[1167]: E1202 20:39:25.409024    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume podName:614a15b2-aa56-4a03-bf8e-08149495e90e nodeName:}" failed. No retries permitted until 2025-12-02 20:39:26.409009692 +0000 UTC m=+6.785010452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume") pod "coredns-66bc5c9577-zhc9c" (UID: "614a15b2-aa56-4a03-bf8e-08149495e90e") : object "kube-system"/"coredns" not registered
	Dec 02 20:39:25 test-preload-234694 kubelet[1167]: E1202 20:39:25.905677    1167 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:39:25 test-preload-234694 kubelet[1167]: E1202 20:39:25.905774    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db7a9e54-1339-431a-8aa5-1f84fab00a57-kube-proxy podName:db7a9e54-1339-431a-8aa5-1f84fab00a57 nodeName:}" failed. No retries permitted until 2025-12-02 20:39:26.405758498 +0000 UTC m=+6.781759249 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/db7a9e54-1339-431a-8aa5-1f84fab00a57-kube-proxy") pod "kube-proxy-6kqv5" (UID: "db7a9e54-1339-431a-8aa5-1f84fab00a57") : failed to sync configmap cache: timed out waiting for the condition
	Dec 02 20:39:26 test-preload-234694 kubelet[1167]: E1202 20:39:26.417556    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 20:39:26 test-preload-234694 kubelet[1167]: E1202 20:39:26.418094    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume podName:614a15b2-aa56-4a03-bf8e-08149495e90e nodeName:}" failed. No retries permitted until 2025-12-02 20:39:28.417875109 +0000 UTC m=+8.793875871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume") pod "coredns-66bc5c9577-zhc9c" (UID: "614a15b2-aa56-4a03-bf8e-08149495e90e") : object "kube-system"/"coredns" not registered
	Dec 02 20:39:26 test-preload-234694 kubelet[1167]: E1202 20:39:26.790426    1167 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-zhc9c" podUID="614a15b2-aa56-4a03-bf8e-08149495e90e"
	Dec 02 20:39:26 test-preload-234694 kubelet[1167]: I1202 20:39:26.801164    1167 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 02 20:39:26 test-preload-234694 kubelet[1167]: I1202 20:39:26.896581    1167 scope.go:117] "RemoveContainer" containerID="fef7a1acf29251db0e90590fa12a583e4d7d4733bfb5af07c80da5a6d075bcf9"
	Dec 02 20:39:27 test-preload-234694 kubelet[1167]: I1202 20:39:27.933005    1167 scope.go:117] "RemoveContainer" containerID="fef7a1acf29251db0e90590fa12a583e4d7d4733bfb5af07c80da5a6d075bcf9"
	Dec 02 20:39:27 test-preload-234694 kubelet[1167]: I1202 20:39:27.933436    1167 scope.go:117] "RemoveContainer" containerID="b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe"
	Dec 02 20:39:27 test-preload-234694 kubelet[1167]: E1202 20:39:27.933665    1167 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6b6a7bea-1aa7-49a9-8858-3ca632d9e66f)\"" pod="kube-system/storage-provisioner" podUID="6b6a7bea-1aa7-49a9-8858-3ca632d9e66f"
	Dec 02 20:39:28 test-preload-234694 kubelet[1167]: E1202 20:39:28.437577    1167 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 02 20:39:28 test-preload-234694 kubelet[1167]: E1202 20:39:28.437659    1167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume podName:614a15b2-aa56-4a03-bf8e-08149495e90e nodeName:}" failed. No retries permitted until 2025-12-02 20:39:32.437642928 +0000 UTC m=+12.813643676 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/614a15b2-aa56-4a03-bf8e-08149495e90e-config-volume") pod "coredns-66bc5c9577-zhc9c" (UID: "614a15b2-aa56-4a03-bf8e-08149495e90e") : object "kube-system"/"coredns" not registered
	Dec 02 20:39:29 test-preload-234694 kubelet[1167]: E1202 20:39:29.817114    1167 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764707969816546249 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 02 20:39:29 test-preload-234694 kubelet[1167]: E1202 20:39:29.817136    1167 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764707969816546249 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 02 20:39:39 test-preload-234694 kubelet[1167]: E1202 20:39:39.821233    1167 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764707979820758012 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 02 20:39:39 test-preload-234694 kubelet[1167]: E1202 20:39:39.821255    1167 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764707979820758012 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [b91bd11e4d3c6a9ed80af2693aa7988f3aa06dcdba2b1d7c38a6cb5d90a8b2fe] <==
	I1202 20:39:27.226200       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1202 20:39:27.230222       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-234694 -n test-preload-234694
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-234694 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-234694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-234694
--- FAIL: TestPreload (113.69s)
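The TestPreload post-mortem above closes with the harness's standard failure query (helpers_test.go:269): ask kubectl for every pod whose phase is not Running. A minimal Go sketch of that step — illustrative only, not the actual helpers_test.go code; the function name and the profile parameter are assumptions for this sketch:

	package helpers

	import (
		"os/exec"
		"testing"
	)

	// listNonRunningPods mirrors the post-mortem query shown above: it runs the
	// same kubectl command against the test profile's context and returns the
	// names of any pods not in the Running phase, failing the test if kubectl
	// itself errors. (Hypothetical helper, sketched for illustration.)
	func listNonRunningPods(t *testing.T, profile string) string {
		t.Helper()
		out, err := exec.Command("kubectl", "--context", profile,
			"get", "po", "-o=jsonpath={.items[*].metadata.name}",
			"-A", "--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			t.Fatalf("kubectl get po failed: %v\n%s", err, out)
		}
		return string(out)
	}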

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-892862 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-892862 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.808637659s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-892862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-892862" primary control-plane node in "pause-892862" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-892862" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:47:30.163624  180709 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:30.163949  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.163966  180709 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:30.163973  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.164238  180709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:47:30.164729  180709 out.go:368] Setting JSON to false
	I1202 20:47:30.165625  180709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8994,"bootTime":1764699456,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:30.165711  180709 start.go:143] virtualization: kvm guest
	I1202 20:47:30.167575  180709 out.go:179] * [pause-892862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:30.168709  180709 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:30.168733  180709 notify.go:221] Checking for updates...
	I1202 20:47:30.171934  180709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:30.173149  180709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:47:30.174456  180709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 20:47:30.176184  180709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:30.177782  180709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:30.179496  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:30.180059  180709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:30.219965  180709 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 20:47:30.221042  180709 start.go:309] selected driver: kvm2
	I1202 20:47:30.221059  180709 start.go:927] validating driver "kvm2" against &{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.221242  180709 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:30.222704  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:30.222789  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:30.222852  180709 start.go:353] cluster config:
	{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.223016  180709 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:47:30.226983  180709 out.go:179] * Starting "pause-892862" primary control-plane node in "pause-892862" cluster
	I1202 20:47:30.228778  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:30.228833  180709 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:47:30.228846  180709 cache.go:65] Caching tarball of preloaded images
	I1202 20:47:30.228948  180709 preload.go:238] Found /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:47:30.228959  180709 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:47:30.229115  180709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/config.json ...
	I1202 20:47:30.229349  180709 start.go:360] acquireMachinesLock for pause-892862: {Name:mk87259b3368832a6a6ed41448f2ab0149793b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 20:47:32.926107  180709 start.go:364] duration metric: took 2.696699728s to acquireMachinesLock for "pause-892862"
	I1202 20:47:32.926168  180709 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:47:32.926189  180709 fix.go:54] fixHost starting: 
	I1202 20:47:32.928804  180709 fix.go:112] recreateIfNeeded on pause-892862: state=Running err=<nil>
	W1202 20:47:32.928835  180709 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:47:32.930871  180709 out.go:252] * Updating the running kvm2 "pause-892862" VM ...
	I1202 20:47:32.930918  180709 machine.go:94] provisionDockerMachine start ...
	I1202 20:47:32.935644  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936189  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:32.936232  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936462  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.936769  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:32.936786  180709 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:47:33.047808  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.047852  180709 buildroot.go:166] provisioning hostname "pause-892862"
	I1202 20:47:33.051443  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052113  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.052158  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052768  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.053004  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.053040  180709 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-892862 && echo "pause-892862" | sudo tee /etc/hostname
	I1202 20:47:33.185041  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.188630  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189134  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.189175  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189418  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.189681  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.189709  180709 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-892862' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-892862/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-892862' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:47:33.298881  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:33.298914  180709 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:47:33.298941  180709 buildroot.go:174] setting up certificates
	I1202 20:47:33.298959  180709 provision.go:84] configureAuth start
	I1202 20:47:33.302402  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.302967  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.302999  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305549  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305961  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.305983  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.306161  180709 provision.go:143] copyHostCerts
	I1202 20:47:33.306220  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:47:33.306233  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:47:33.306318  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:47:33.306443  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:47:33.306453  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:47:33.306479  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:47:33.306565  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:47:33.306577  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:47:33.306609  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:47:33.306711  180709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.pause-892862 san=[127.0.0.1 192.168.39.176 localhost minikube pause-892862]
	I1202 20:47:33.378291  180709 provision.go:177] copyRemoteCerts
	I1202 20:47:33.378348  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:47:33.380736  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381141  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.381167  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381324  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:33.470745  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:47:33.504748  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 20:47:33.541137  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:47:33.579109  180709 provision.go:87] duration metric: took 280.127807ms to configureAuth
	I1202 20:47:33.579147  180709 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:47:33.579375  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:33.583108  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583711  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.583741  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583957  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.584207  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.584224  180709 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:47:39.214350  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:39.214378  180709 machine.go:97] duration metric: took 6.283447535s to provisionDockerMachine
	I1202 20:47:39.214393  180709 start.go:293] postStartSetup for "pause-892862" (driver="kvm2")
	I1202 20:47:39.214406  180709 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:39.214474  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:39.219158  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.219732  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.219770  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.220034  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.314156  180709 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:39.320511  180709 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:47:39.320551  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:47:39.320667  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:47:39.320779  180709 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:47:39.320906  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:39.340926  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:39.382551  180709 start.go:296] duration metric: took 168.137636ms for postStartSetup
	I1202 20:47:39.382618  180709 fix.go:56] duration metric: took 6.456440939s for fixHost
	I1202 20:47:39.386893  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387430  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.387478  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387794  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:39.388131  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:39.388152  180709 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:47:39.503084  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764708459.496621228
	
	I1202 20:47:39.503107  180709 fix.go:216] guest clock: 1764708459.496621228
	I1202 20:47:39.503116  180709 fix.go:229] Guest: 2025-12-02 20:47:39.496621228 +0000 UTC Remote: 2025-12-02 20:47:39.382625482 +0000 UTC m=+9.271396085 (delta=113.995746ms)
	I1202 20:47:39.503140  180709 fix.go:200] guest clock delta is within tolerance: 113.995746ms
	I1202 20:47:39.503147  180709 start.go:83] releasing machines lock for "pause-892862", held for 6.576997859s
	I1202 20:47:39.506571  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507124  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.507156  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507824  180709 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:39.507913  180709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:39.511523  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.511852  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512084  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512119  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512311  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.512328  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512358  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512566  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.599611  180709 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:39.639739  180709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:39.801939  180709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:39.813366  180709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:39.813453  180709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:39.825610  180709 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:47:39.825642  180709 start.go:496] detecting cgroup driver to use...
	I1202 20:47:39.825772  180709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:39.851955  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:39.871192  180709 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:39.871265  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:39.893578  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:39.915897  180709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:40.157168  180709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:40.351772  180709 docker.go:234] disabling docker service ...
	I1202 20:47:40.351857  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:40.382162  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:40.400292  180709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:40.619600  180709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:40.818294  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:40.836375  180709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:40.862872  180709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:47:40.862953  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.876930  180709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:47:40.877005  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.892088  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.905117  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.917965  180709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:40.932792  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.945233  180709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.959143  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.971613  180709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:40.982500  180709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:40.994339  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:41.169910  180709 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:41.484137  180709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:41.484220  180709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:41.489514  180709 start.go:564] Will wait 60s for crictl version
	I1202 20:47:41.489573  180709 ssh_runner.go:195] Run: which crictl
	I1202 20:47:41.493586  180709 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:47:41.525318  180709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:47:41.525408  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.556371  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.587171  180709 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1202 20:47:41.591217  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591703  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:41.591730  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591928  180709 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 20:47:41.596712  180709 kubeadm.go:884] updating cluster {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:47:41.596857  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:41.596919  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.640327  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.640362  180709 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:47:41.640430  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.679399  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.679421  180709 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:47:41.679428  180709 kubeadm.go:935] updating node { 192.168.39.176 8443 v1.34.2 crio true true} ...
	I1202 20:47:41.679522  180709 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-892862 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:47:41.679586  180709 ssh_runner.go:195] Run: crio config
	I1202 20:47:41.728823  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:41.728896  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:41.728935  180709 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:47:41.728988  180709 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-892862 NodeName:pause-892862 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:47:41.729271  180709 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-892862"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:47:41.729355  180709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:47:41.744415  180709 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:47:41.744505  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:47:41.758529  180709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 20:47:41.787792  180709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:47:41.811286  180709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1202 20:47:41.832600  180709 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I1202 20:47:41.836814  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:42.006895  180709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:42.027123  180709 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862 for IP: 192.168.39.176
	I1202 20:47:42.027153  180709 certs.go:195] generating shared ca certs ...
	I1202 20:47:42.027177  180709 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:42.027375  180709 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:47:42.027422  180709 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:47:42.027429  180709 certs.go:257] generating profile certs ...
	I1202 20:47:42.027518  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/client.key
	I1202 20:47:42.027573  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key.c6c045af
	I1202 20:47:42.027608  180709 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key
	I1202 20:47:42.027757  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:47:42.027788  180709 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:47:42.027794  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:47:42.027818  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:47:42.027840  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:47:42.027867  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:47:42.027933  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:42.028560  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:47:42.065172  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:47:42.098807  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:47:42.133019  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:47:42.169561  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:47:42.204528  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:47:42.246305  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:47:42.361413  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:47:42.430775  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:47:42.493156  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:47:42.571562  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:47:42.666413  180709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:47:42.756478  180709 ssh_runner.go:195] Run: openssl version
	I1202 20:47:42.770080  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:47:42.796346  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806695  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806795  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.822001  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:47:42.850457  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:47:42.874641  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890747  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890825  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.904269  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:47:42.931175  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:47:42.979186  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999319  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999403  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:47:43.013677  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:47:43.040096  180709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:47:43.057377  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:47:43.073530  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:47:43.088015  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:47:43.103880  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:47:43.115613  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:47:43.126096  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:47:43.137582  180709 kubeadm.go:401] StartCluster: {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:43.137716  180709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:47:43.137772  180709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:47:43.241476  180709 cri.go:89] found id: "5ae303b8cd9929f78e5e243a00749433f3a28f47ff958be9b1bd42b8690a0a4f"
	I1202 20:47:43.241501  180709 cri.go:89] found id: "5ab2b69576bd7ddc7b9834385c7475de654c082386d224fb389bea4e68f3c384"
	I1202 20:47:43.241506  180709 cri.go:89] found id: "063bdc2f4044d47e558e971c8d8742aec22bec89182a48c14bc4dc181c60a531"
	I1202 20:47:43.241510  180709 cri.go:89] found id: "9bbfedda04a70bdbc59f66ca20322b7bf1717ad77a590bbc7c2ce4242714ec5c"
	I1202 20:47:43.241514  180709 cri.go:89] found id: "c3ba4033625655ee75b7cdd32c8895e62e5f26321e371238b33d804ab1138926"
	I1202 20:47:43.241518  180709 cri.go:89] found id: "4eb3b7ec4b7d853bf9eb9a01676c24007457097a629f779a01fc49110e7cc47d"
	I1202 20:47:43.241523  180709 cri.go:89] found id: "7a076c19ae69f444d8beaca6206d51a7ea8266bb0ac74b038fb2531b733b0ed1"
	I1202 20:47:43.241527  180709 cri.go:89] found id: "bdb1b64ca24e08df0dda142abb2f57874f9cda21c9400ad109b3980d49353290"
	I1202 20:47:43.241531  180709 cri.go:89] found id: ""
	I1202 20:47:43.241581  180709 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-892862 -n pause-892862
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-892862 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-892862 logs -n 25: (1.497706906s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-019279 sudo cat /etc/containerd/config.toml                                                                                                       │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-019279 sudo containerd config dump                                                                                                                │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-019279 sudo systemctl status crio --all --full --no-pager                                                                                         │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-019279 sudo systemctl cat crio --no-pager                                                                                                         │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-019279 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                               │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ ssh     │ -p cilium-019279 sudo crio config                                                                                                                           │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ delete  │ -p cilium-019279                                                                                                                                            │ cilium-019279             │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:45 UTC │
	│ start   │ -p stopped-upgrade-225043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-225043    │ jenkins │ v1.35.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-095611 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-095611    │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ start   │ -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio               │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ stop    │ stopped-upgrade-225043 stop                                                                                                                                 │ stopped-upgrade-225043    │ jenkins │ v1.35.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:45 UTC │
	│ start   │ -p stopped-upgrade-225043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ delete  │ -p cert-expiration-095611                                                                                                                                   │ cert-expiration-095611    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p guest-856307 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-856307              │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ delete  │ -p kubernetes-upgrade-950537                                                                                                                                │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p pause-892862 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-892862              │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:47 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-225043 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │                     │
	│ delete  │ -p stopped-upgrade-225043                                                                                                                                   │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p auto-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:47 UTC │
	│ start   │ -p kindnet-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-019279            │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │                     │
	│ start   │ -p pause-892862 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-892862              │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 pgrep -a kubelet                                                                                                                             │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:47 UTC │
	│ ssh     │ -p auto-019279 sudo cat /etc/nsswitch.conf                                                                                                                  │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo cat /etc/resolv.conf                                                                                                                    │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:47:30
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:47:30.163624  180709 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:30.163949  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.163966  180709 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:30.163973  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.164238  180709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:47:30.164729  180709 out.go:368] Setting JSON to false
	I1202 20:47:30.165625  180709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8994,"bootTime":1764699456,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:30.165711  180709 start.go:143] virtualization: kvm guest
	I1202 20:47:30.167575  180709 out.go:179] * [pause-892862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:30.168709  180709 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:30.168733  180709 notify.go:221] Checking for updates...
	I1202 20:47:30.171934  180709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:30.173149  180709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:47:30.174456  180709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 20:47:30.176184  180709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:30.177782  180709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:30.179496  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:30.180059  180709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:30.219965  180709 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 20:47:30.221042  180709 start.go:309] selected driver: kvm2
	I1202 20:47:30.221059  180709 start.go:927] validating driver "kvm2" against &{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.221242  180709 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:30.222704  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:30.222789  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:30.222852  180709 start.go:353] cluster config:
	{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.223016  180709 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:47:30.226983  180709 out.go:179] * Starting "pause-892862" primary control-plane node in "pause-892862" cluster
	I1202 20:47:31.094777  179993 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503160671s
	I1202 20:47:31.116228  179993 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:47:31.131202  179993 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:47:31.149434  179993 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:47:31.149709  179993 kubeadm.go:319] [mark-control-plane] Marking the node auto-019279 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:47:31.161563  179993 kubeadm.go:319] [bootstrap-token] Using token: 78nlm6.i9hh1cmbbamz8gh4
	I1202 20:47:31.163631  179993 out.go:252]   - Configuring RBAC rules ...
	I1202 20:47:31.163835  179993 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:47:31.171768  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:47:31.179155  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:47:31.182233  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:47:31.186717  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:47:31.191222  179993 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:47:31.503295  179993 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:47:31.969560  179993 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:47:32.502187  179993 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:47:32.503125  179993 kubeadm.go:319] 
	I1202 20:47:32.503309  179993 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:47:32.503333  179993 kubeadm.go:319] 
	I1202 20:47:32.503458  179993 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:47:32.503482  179993 kubeadm.go:319] 
	I1202 20:47:32.503517  179993 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:47:32.503601  179993 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:47:32.503694  179993 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:47:32.503705  179993 kubeadm.go:319] 
	I1202 20:47:32.503788  179993 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:47:32.503808  179993 kubeadm.go:319] 
	I1202 20:47:32.503866  179993 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:47:32.503875  179993 kubeadm.go:319] 
	I1202 20:47:32.503950  179993 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:47:32.504067  179993 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:47:32.504171  179993 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:47:32.504183  179993 kubeadm.go:319] 
	I1202 20:47:32.504286  179993 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:47:32.504389  179993 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:47:32.504412  179993 kubeadm.go:319] 
	I1202 20:47:32.504526  179993 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 78nlm6.i9hh1cmbbamz8gh4 \
	I1202 20:47:32.504703  179993 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 \
	I1202 20:47:32.504742  179993 kubeadm.go:319] 	--control-plane 
	I1202 20:47:32.504755  179993 kubeadm.go:319] 
	I1202 20:47:32.504879  179993 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:47:32.504890  179993 kubeadm.go:319] 
	I1202 20:47:32.504983  179993 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 78nlm6.i9hh1cmbbamz8gh4 \
	I1202 20:47:32.505161  179993 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 
	I1202 20:47:32.506350  179993 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:47:32.506377  179993 cni.go:84] Creating CNI manager for ""
	I1202 20:47:32.506388  179993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:32.508249  179993 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1202 20:47:30.228778  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:30.228833  180709 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:47:30.228846  180709 cache.go:65] Caching tarball of preloaded images
	I1202 20:47:30.228948  180709 preload.go:238] Found /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:47:30.228959  180709 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:47:30.229115  180709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/config.json ...
	I1202 20:47:30.229349  180709 start.go:360] acquireMachinesLock for pause-892862: {Name:mk87259b3368832a6a6ed41448f2ab0149793b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 20:47:32.926107  180709 start.go:364] duration metric: took 2.696699728s to acquireMachinesLock for "pause-892862"
	I1202 20:47:32.926168  180709 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:47:32.926189  180709 fix.go:54] fixHost starting: 
	I1202 20:47:32.928804  180709 fix.go:112] recreateIfNeeded on pause-892862: state=Running err=<nil>
	W1202 20:47:32.928835  180709 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:47:30.742110  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:30.743070  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:30.743141  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:30.743193  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:30.785582  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:30.785613  176698 cri.go:89] found id: ""
	I1202 20:47:30.785627  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:30.785722  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.790405  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:30.790476  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:30.833179  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:30.833205  176698 cri.go:89] found id: ""
	I1202 20:47:30.833216  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:30.833282  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.837789  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:30.837876  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:30.879081  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:30.879117  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:30.879124  176698 cri.go:89] found id: ""
	I1202 20:47:30.879135  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:30.879218  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.884605  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.889809  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:30.889895  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:30.933181  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:30.933207  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:30.933212  176698 cri.go:89] found id: ""
	I1202 20:47:30.933220  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:30.933276  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.937670  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.942933  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:30.943017  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:30.987424  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:30.987454  176698 cri.go:89] found id: ""
	I1202 20:47:30.987464  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:30.987534  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.991841  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:30.991921  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:31.027277  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:31.027309  176698 cri.go:89] found id: ""
	I1202 20:47:31.027321  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:31.027393  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:31.032207  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:31.032295  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:31.075508  176698 cri.go:89] found id: ""
	I1202 20:47:31.075540  176698 logs.go:282] 0 containers: []
	W1202 20:47:31.075552  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:31.075560  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:31.075636  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:31.115010  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:31.115038  176698 cri.go:89] found id: ""
	I1202 20:47:31.115050  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:31.115119  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:31.120648  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:31.120697  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:31.193602  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:31.193627  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:31.193651  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:31.235972  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:31.236012  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:31.286758  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:31.286800  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:31.327870  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:31.327900  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:31.412866  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:31.412911  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:31.451014  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:31.451067  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:31.492259  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:31.492290  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:31.963139  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:31.963180  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:32.070242  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:32.070323  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:32.090339  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:32.090374  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:32.141380  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:32.141425  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:32.183760  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:32.183795  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:32.226518  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:32.226548  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:47:31.240112  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.240928  180134 main.go:143] libmachine: domain kindnet-019279 has current primary IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.240956  180134 main.go:143] libmachine: found domain IP: 192.168.83.176
	I1202 20:47:31.240967  180134 main.go:143] libmachine: reserving static IP address...
	I1202 20:47:31.241570  180134 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-019279", mac: "52:54:00:48:bb:87", ip: "192.168.83.176"} in network mk-kindnet-019279
	I1202 20:47:31.511193  180134 main.go:143] libmachine: reserved static IP address 192.168.83.176 for domain kindnet-019279
	I1202 20:47:31.511219  180134 main.go:143] libmachine: waiting for SSH...
	I1202 20:47:31.511228  180134 main.go:143] libmachine: Getting to WaitForSSH function...
	I1202 20:47:31.515342  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.516101  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.516146  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.516381  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.516753  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.516772  180134 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1202 20:47:31.655881  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:31.656310  180134 main.go:143] libmachine: domain creation complete
	I1202 20:47:31.658450  180134 machine.go:94] provisionDockerMachine start ...
	I1202 20:47:31.661350  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.661904  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.661942  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.662166  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.662389  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.662402  180134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:47:31.782896  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 20:47:31.782929  180134 buildroot.go:166] provisioning hostname "kindnet-019279"
	I1202 20:47:31.787013  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.787537  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.787563  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.787798  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.788097  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.788113  180134 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-019279 && echo "kindnet-019279" | sudo tee /etc/hostname
	I1202 20:47:31.931616  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-019279
	
	I1202 20:47:31.935061  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.935613  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.935652  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.935868  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.936164  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.936187  180134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-019279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-019279/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-019279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:47:32.069533  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:32.069574  180134 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:47:32.069607  180134 buildroot.go:174] setting up certificates
	I1202 20:47:32.069619  180134 provision.go:84] configureAuth start
	I1202 20:47:32.073872  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.074454  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.074506  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.077822  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.078380  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.078443  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.078626  180134 provision.go:143] copyHostCerts
	I1202 20:47:32.078700  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:47:32.078716  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:47:32.078810  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:47:32.078959  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:47:32.078977  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:47:32.079030  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:47:32.079134  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:47:32.079149  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:47:32.079194  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:47:32.079274  180134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.kindnet-019279 san=[127.0.0.1 192.168.83.176 kindnet-019279 localhost minikube]
	I1202 20:47:32.185213  180134 provision.go:177] copyRemoteCerts
	I1202 20:47:32.185281  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:47:32.189047  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.189543  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.189574  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.189828  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.282915  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1202 20:47:32.313619  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:47:32.345879  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:47:32.376338  180134 provision.go:87] duration metric: took 306.701427ms to configureAuth
	I1202 20:47:32.376370  180134 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:47:32.376588  180134 config.go:182] Loaded profile config "kindnet-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:32.379586  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.380091  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.380121  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.380359  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.380638  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:32.380667  180134 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:47:32.653857  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:32.653899  180134 machine.go:97] duration metric: took 995.42366ms to provisionDockerMachine
	I1202 20:47:32.653916  180134 client.go:176] duration metric: took 19.961397793s to LocalClient.Create
	I1202 20:47:32.653941  180134 start.go:167] duration metric: took 19.961478847s to libmachine.API.Create "kindnet-019279"
	I1202 20:47:32.653950  180134 start.go:293] postStartSetup for "kindnet-019279" (driver="kvm2")
	I1202 20:47:32.653962  180134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:32.654036  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:32.656850  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.657288  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.657315  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.657468  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.748696  180134 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:32.754455  180134 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:47:32.754491  180134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:47:32.754565  180134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:47:32.754645  180134 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:47:32.754753  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:32.767715  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:32.799230  180134 start.go:296] duration metric: took 145.264325ms for postStartSetup
	I1202 20:47:32.803168  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.803580  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.803618  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.803995  180134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/config.json ...
	I1202 20:47:32.804308  180134 start.go:128] duration metric: took 20.113901821s to createHost
	I1202 20:47:32.806684  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.807026  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.807054  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.807245  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.807524  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:32.807541  180134 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:47:32.925909  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764708452.887389419
	
	I1202 20:47:32.925937  180134 fix.go:216] guest clock: 1764708452.887389419
	I1202 20:47:32.925948  180134 fix.go:229] Guest: 2025-12-02 20:47:32.887389419 +0000 UTC Remote: 2025-12-02 20:47:32.804326988 +0000 UTC m=+59.344015853 (delta=83.062431ms)
	I1202 20:47:32.925968  180134 fix.go:200] guest clock delta is within tolerance: 83.062431ms
	I1202 20:47:32.925973  180134 start.go:83] releasing machines lock for "kindnet-019279", held for 20.235712607s
	I1202 20:47:32.929648  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.930339  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.930371  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.930999  180134 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:32.931134  180134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:32.935840  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.935859  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936323  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.936343  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.936363  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936375  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936706  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.936728  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:33.054348  180134 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:33.061280  180134 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:33.232186  180134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:33.240876  180134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:33.240945  180134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:33.263136  180134 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:47:33.263166  180134 start.go:496] detecting cgroup driver to use...
	I1202 20:47:33.263280  180134 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:33.283484  180134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:33.304395  180134 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:33.304467  180134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:33.324506  180134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:33.342436  180134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:33.507041  180134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:33.748198  180134 docker.go:234] disabling docker service ...
	I1202 20:47:33.748275  180134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:33.771105  180134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:33.791884  180134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:33.953006  180134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:34.109132  180134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:34.124797  180134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:34.147513  180134 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:47:34.147577  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.160317  180134 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:47:34.160383  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.173150  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.185560  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.198867  180134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:34.211859  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.224336  180134 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.245263  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.260844  180134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:34.276504  180134 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 20:47:34.276569  180134 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 20:47:34.304585  180134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:34.317393  180134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:34.459892  180134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:34.576296  180134 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:34.576390  180134 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:34.582102  180134 start.go:564] Will wait 60s for crictl version
	I1202 20:47:34.582172  180134 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.586541  180134 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:47:34.621136  180134 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:47:34.621211  180134 ssh_runner.go:195] Run: crio --version
	I1202 20:47:34.654121  180134 ssh_runner.go:195] Run: crio --version
	I1202 20:47:34.684829  180134 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1202 20:47:32.509797  179993 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 20:47:32.524633  179993 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 20:47:32.556241  179993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:47:32.556317  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:32.556365  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-019279 minikube.k8s.io/updated_at=2025_12_02T20_47_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=auto-019279 minikube.k8s.io/primary=true
	I1202 20:47:32.602552  179993 ops.go:34] apiserver oom_adj: -16
	I1202 20:47:32.748081  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:33.248918  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:33.748360  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:34.248791  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:34.748677  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:32.930871  180709 out.go:252] * Updating the running kvm2 "pause-892862" VM ...
	I1202 20:47:32.930918  180709 machine.go:94] provisionDockerMachine start ...
	I1202 20:47:32.935644  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936189  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:32.936232  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936462  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.936769  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:32.936786  180709 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:47:33.047808  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.047852  180709 buildroot.go:166] provisioning hostname "pause-892862"
	I1202 20:47:33.051443  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052113  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.052158  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052768  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.053004  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.053040  180709 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-892862 && echo "pause-892862" | sudo tee /etc/hostname
	I1202 20:47:33.185041  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.188630  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189134  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.189175  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189418  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.189681  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.189709  180709 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-892862' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-892862/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-892862' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:47:33.298881  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:33.298914  180709 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:47:33.298941  180709 buildroot.go:174] setting up certificates
	I1202 20:47:33.298959  180709 provision.go:84] configureAuth start
	I1202 20:47:33.302402  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.302967  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.302999  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305549  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305961  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.305983  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.306161  180709 provision.go:143] copyHostCerts
	I1202 20:47:33.306220  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:47:33.306233  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:47:33.306318  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:47:33.306443  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:47:33.306453  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:47:33.306479  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:47:33.306565  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:47:33.306577  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:47:33.306609  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:47:33.306711  180709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.pause-892862 san=[127.0.0.1 192.168.39.176 localhost minikube pause-892862]
	I1202 20:47:33.378291  180709 provision.go:177] copyRemoteCerts
	I1202 20:47:33.378348  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:47:33.380736  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381141  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.381167  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381324  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:33.470745  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:47:33.504748  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 20:47:33.541137  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:47:33.579109  180709 provision.go:87] duration metric: took 280.127807ms to configureAuth
	I1202 20:47:33.579147  180709 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:47:33.579375  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:33.583108  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583711  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.583741  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583957  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.584207  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.584224  180709 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:47:35.248415  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:35.748738  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:36.248890  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:36.372059  179993 kubeadm.go:1114] duration metric: took 3.815824768s to wait for elevateKubeSystemPrivileges
	I1202 20:47:36.372102  179993 kubeadm.go:403] duration metric: took 17.347266589s to StartCluster
	I1202 20:47:36.372124  179993 settings.go:142] acquiring lock: {Name:mka4c337368f188b532e41dc38505f24fc351556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:36.372219  179993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:47:36.373645  179993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/kubeconfig: {Name:mk45f2610791f17b0d78039ad0468591c7331759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:36.374002  179993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:47:36.374004  179993 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:47:36.374108  179993 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:47:36.374263  179993 addons.go:70] Setting storage-provisioner=true in profile "auto-019279"
	I1202 20:47:36.374288  179993 addons.go:239] Setting addon storage-provisioner=true in "auto-019279"
	I1202 20:47:36.374286  179993 config.go:182] Loaded profile config "auto-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:36.374328  179993 host.go:66] Checking if "auto-019279" exists ...
	I1202 20:47:36.374353  179993 addons.go:70] Setting default-storageclass=true in profile "auto-019279"
	I1202 20:47:36.374376  179993 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-019279"
	I1202 20:47:36.376150  179993 out.go:179] * Verifying Kubernetes components...
	I1202 20:47:36.377527  179993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:36.377624  179993 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:47:36.378770  179993 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:47:36.378790  179993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:47:36.379541  179993 addons.go:239] Setting addon default-storageclass=true in "auto-019279"
	I1202 20:47:36.379586  179993 host.go:66] Checking if "auto-019279" exists ...
	I1202 20:47:36.382171  179993 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:47:36.382194  179993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:47:36.382923  179993 main.go:143] libmachine: domain auto-019279 has defined MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.383757  179993 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:7d:16", ip: ""} in network mk-auto-019279: {Iface:virbr3 ExpiryTime:2025-12-02 21:47:07 +0000 UTC Type:0 Mac:52:54:00:24:7d:16 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:auto-019279 Clientid:01:52:54:00:24:7d:16}
	I1202 20:47:36.383793  179993 main.go:143] libmachine: domain auto-019279 has defined IP address 192.168.61.205 and MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.384104  179993 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/auto-019279/id_rsa Username:docker}
	I1202 20:47:36.386652  179993 main.go:143] libmachine: domain auto-019279 has defined MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.387337  179993 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:7d:16", ip: ""} in network mk-auto-019279: {Iface:virbr3 ExpiryTime:2025-12-02 21:47:07 +0000 UTC Type:0 Mac:52:54:00:24:7d:16 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:auto-019279 Clientid:01:52:54:00:24:7d:16}
	I1202 20:47:36.387381  179993 main.go:143] libmachine: domain auto-019279 has defined IP address 192.168.61.205 and MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.387588  179993 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/auto-019279/id_rsa Username:docker}
	I1202 20:47:36.571641  179993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:47:36.684683  179993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:36.875873  179993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:47:36.920118  179993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:47:37.440753  179993 start.go:977] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1202 20:47:37.443756  179993 node_ready.go:35] waiting up to 15m0s for node "auto-019279" to be "Ready" ...
	I1202 20:47:37.473156  179993 node_ready.go:49] node "auto-019279" is "Ready"
	I1202 20:47:37.473189  179993 node_ready.go:38] duration metric: took 29.40163ms for node "auto-019279" to be "Ready" ...
	I1202 20:47:37.473203  179993 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:47:37.473255  179993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:47:37.949703  179993 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-019279" context rescaled to 1 replicas
	I1202 20:47:38.097048  179993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.221126786s)
	I1202 20:47:38.097095  179993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176941722s)
	I1202 20:47:38.097176  179993 api_server.go:72] duration metric: took 1.723134624s to wait for apiserver process to appear ...
	I1202 20:47:38.097217  179993 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:47:38.097240  179993 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 20:47:38.114469  179993 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 20:47:38.118999  179993 api_server.go:141] control plane version: v1.34.2
	I1202 20:47:38.119032  179993 api_server.go:131] duration metric: took 21.805987ms to wait for apiserver health ...
	I1202 20:47:38.119128  179993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:47:38.120739  179993 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 20:47:34.774189  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:34.775043  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:34.775116  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:34.775192  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:34.818474  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:34.818504  176698 cri.go:89] found id: ""
	I1202 20:47:34.818515  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:34.818584  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.822986  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:34.823088  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:34.874618  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:34.874651  176698 cri.go:89] found id: ""
	I1202 20:47:34.874681  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:34.874765  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.879383  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:34.879459  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:34.922937  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:34.922964  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:34.922971  176698 cri.go:89] found id: ""
	I1202 20:47:34.922982  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:34.923055  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.928248  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.933324  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:34.933402  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:34.978298  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:34.978323  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:34.978330  176698 cri.go:89] found id: ""
	I1202 20:47:34.978340  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:34.978410  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.983977  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.988784  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:34.988859  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:35.036448  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:35.036480  176698 cri.go:89] found id: ""
	I1202 20:47:35.036496  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:35.036568  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.041667  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:35.041749  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:35.099760  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:35.099790  176698 cri.go:89] found id: ""
	I1202 20:47:35.099801  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:35.099883  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.106030  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:35.106123  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:35.157596  176698 cri.go:89] found id: ""
	I1202 20:47:35.157633  176698 logs.go:282] 0 containers: []
	W1202 20:47:35.157646  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:35.157685  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:35.157763  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:35.207306  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:35.207347  176698 cri.go:89] found id: ""
	I1202 20:47:35.207360  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:35.207445  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.213574  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:35.213611  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:35.317251  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:35.317331  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:35.317352  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:35.377824  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:35.377865  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:35.433125  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:35.433167  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:35.495814  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:35.495858  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:35.611102  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:35.611154  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:36.137070  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:36.137111  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:47:36.189784  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:36.189831  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:36.283354  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:36.283395  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:36.302621  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:36.302669  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:36.347105  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:36.347146  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:36.389059  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:36.389098  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:36.442604  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:36.442638  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:36.494136  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:36.494164  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:34.689208  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:34.689627  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:34.689673  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:34.689897  180134 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1202 20:47:34.694931  180134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:47:34.710536  180134 kubeadm.go:884] updating cluster {Name:kindnet-019279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:47:34.710702  180134 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:34.710756  180134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:34.742132  180134 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1202 20:47:34.742237  180134 ssh_runner.go:195] Run: which lz4
	I1202 20:47:34.746981  180134 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 20:47:34.752139  180134 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 20:47:34.752177  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1202 20:47:36.124481  180134 crio.go:462] duration metric: took 1.377530742s to copy over tarball
	I1202 20:47:36.124586  180134 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 20:47:37.902136  180134 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.777508517s)
	I1202 20:47:37.902187  180134 crio.go:469] duration metric: took 1.777659621s to extract the tarball
	I1202 20:47:37.902197  180134 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 20:47:37.944478  180134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:38.002940  180134 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:38.002964  180134 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:47:38.002971  180134 kubeadm.go:935] updating node { 192.168.83.176 8443 v1.34.2 crio true true} ...
	I1202 20:47:38.003064  180134 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-019279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1202 20:47:38.003133  180134 ssh_runner.go:195] Run: crio config
	I1202 20:47:38.074332  180134 cni.go:84] Creating CNI manager for "kindnet"
	I1202 20:47:38.074374  180134 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:47:38.074410  180134 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.176 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-019279 NodeName:kindnet-019279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:47:38.074567  180134 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-019279"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:47:38.074665  180134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:47:38.089202  180134 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:47:38.089274  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:47:38.103041  180134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1202 20:47:38.131005  180134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:47:38.156015  180134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
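The kubeadm/kubelet configuration generated above pins the CRI-O socket (unix:///var/run/crio/crio.sock) and the cgroupfs driver before being copied to /var/tmp/minikube/kubeadm.yaml.new. As a minimal sketch, not part of the captured log and not minikube's own code (it assumes gopkg.in/yaml.v3 is available; file and variable names are hypothetical), the KubeletConfiguration fragment can be parsed and sanity-checked outside the cluster like this:

// kubeletcheck.go - illustrative sketch only; prints the fields minikube pins for CRI-O.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	// Trimmed copy of the KubeletConfiguration fragment shown in the log above.
	doc := []byte(`apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatalf("parse kubelet config: %v", err)
	}
	fmt.Printf("%s: driver=%s endpoint=%s staticPods=%s\n",
		cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.StaticPodPath)
}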
	I1202 20:47:38.182702  180134 ssh_runner.go:195] Run: grep 192.168.83.176	control-plane.minikube.internal$ /etc/hosts
	I1202 20:47:38.187283  180134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:47:38.208995  180134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:38.370415  180134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:38.392535  180134 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279 for IP: 192.168.83.176
	I1202 20:47:38.392573  180134 certs.go:195] generating shared ca certs ...
	I1202 20:47:38.392600  180134 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.392841  180134 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:47:38.392923  180134 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:47:38.392940  180134 certs.go:257] generating profile certs ...
	I1202 20:47:38.393027  180134 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key
	I1202 20:47:38.393047  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt with IP's: []
	I1202 20:47:38.122006  179993 addons.go:530] duration metric: took 1.74789337s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:47:38.127703  179993 system_pods.go:59] 8 kube-system pods found
	I1202 20:47:38.127768  179993 system_pods.go:61] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.127789  179993 system_pods.go:61] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.127802  179993 system_pods.go:61] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.127826  179993 system_pods.go:61] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.127843  179993 system_pods.go:61] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.127857  179993 system_pods.go:61] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:47:38.127869  179993 system_pods.go:61] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.127876  179993 system_pods.go:61] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending
	I1202 20:47:38.127889  179993 system_pods.go:74] duration metric: took 8.737313ms to wait for pod list to return data ...
	I1202 20:47:38.127903  179993 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:47:38.135830  179993 default_sa.go:45] found service account: "default"
	I1202 20:47:38.135860  179993 default_sa.go:55] duration metric: took 7.947755ms for default service account to be created ...
	I1202 20:47:38.135873  179993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:47:38.146214  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.146253  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.146263  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.146273  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.146283  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.146295  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.146308  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:47:38.146319  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.146328  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending
	I1202 20:47:38.146365  179993 retry.go:31] will retry after 265.55355ms: missing components: kube-dns, kube-proxy
	I1202 20:47:38.424516  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.424559  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.424570  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.424675  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.424694  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.424707  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.424717  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:38.424726  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.424734  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:38.424755  179993 retry.go:31] will retry after 269.333893ms: missing components: kube-dns
	I1202 20:47:38.699540  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.699600  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.699613  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.699623  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.699634  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.699648  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.699677  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:38.699692  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.699699  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:38.699722  179993 retry.go:31] will retry after 479.698489ms: missing components: kube-dns
	I1202 20:47:39.210986  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:39.211050  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.211064  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.211076  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:39.211088  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:39.211102  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:39.211113  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:39.211128  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:39.211136  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:39.211160  179993 retry.go:31] will retry after 380.187566ms: missing components: kube-dns
	I1202 20:47:39.597001  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:39.597040  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.597049  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.597058  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:39.597064  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:39.597073  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:39.597078  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:39.597088  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:39.597095  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:39.597106  179993 system_pods.go:126] duration metric: took 1.461225378s to wait for k8s-apps to be running ...
	I1202 20:47:39.597121  179993 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:47:39.597174  179993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:47:39.621315  179993 system_svc.go:56] duration metric: took 24.179703ms WaitForService to wait for kubelet
	I1202 20:47:39.621355  179993 kubeadm.go:587] duration metric: took 3.24731856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:47:39.621380  179993 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:47:39.626571  179993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 20:47:39.626607  179993 node_conditions.go:123] node cpu capacity is 2
	I1202 20:47:39.626627  179993 node_conditions.go:105] duration metric: took 5.239878ms to run NodePressure ...
	I1202 20:47:39.626643  179993 start.go:242] waiting for startup goroutines ...
	I1202 20:47:39.626675  179993 start.go:247] waiting for cluster config update ...
	I1202 20:47:39.626693  179993 start.go:256] writing updated cluster config ...
	I1202 20:47:39.636799  179993 ssh_runner.go:195] Run: rm -f paused
	I1202 20:47:39.645606  179993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:47:39.651753  179993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-82fs6" in "kube-system" namespace to be "Ready" or be gone ...
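The readiness loop above polls kube-system pods and backs off between attempts ("will retry after ...: missing components: ...") until nothing is missing or the timeout expires. A minimal sketch of that retry pattern follows; the check function is a hypothetical stand-in, not minikube's implementation:

// retrysketch.go - illustrative retry-with-backoff loop mirroring the log above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents is a hypothetical stand-in for the real pod check;
// it would query the API server and return components that are not yet Running.
func missingComponents() []string {
	return nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		missing := missingComponents()
		if len(missing) == 0 {
			fmt.Println("all components running")
			return
		}
		// Wait a short, slightly randomized interval before the next attempt.
		wait := time.Duration(200+rand.Intn(400)) * time.Millisecond
		fmt.Printf("will retry after %s: missing components: %v\n", wait, missing)
		time.Sleep(wait)
	}
	fmt.Println("timed out waiting for components")
}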
	I1202 20:47:39.214350  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:39.214378  180709 machine.go:97] duration metric: took 6.283447535s to provisionDockerMachine
	I1202 20:47:39.214393  180709 start.go:293] postStartSetup for "pause-892862" (driver="kvm2")
	I1202 20:47:39.214406  180709 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:39.214474  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:39.219158  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.219732  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.219770  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.220034  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.314156  180709 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:39.320511  180709 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:47:39.320551  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:47:39.320667  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:47:39.320779  180709 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:47:39.320906  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:39.340926  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:39.382551  180709 start.go:296] duration metric: took 168.137636ms for postStartSetup
	I1202 20:47:39.382618  180709 fix.go:56] duration metric: took 6.456440939s for fixHost
	I1202 20:47:39.386893  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387430  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.387478  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387794  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:39.388131  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:39.388152  180709 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:47:39.503084  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764708459.496621228
	
	I1202 20:47:39.503107  180709 fix.go:216] guest clock: 1764708459.496621228
	I1202 20:47:39.503116  180709 fix.go:229] Guest: 2025-12-02 20:47:39.496621228 +0000 UTC Remote: 2025-12-02 20:47:39.382625482 +0000 UTC m=+9.271396085 (delta=113.995746ms)
	I1202 20:47:39.503140  180709 fix.go:200] guest clock delta is within tolerance: 113.995746ms
	I1202 20:47:39.503147  180709 start.go:83] releasing machines lock for "pause-892862", held for 6.576997859s
	I1202 20:47:39.506571  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507124  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.507156  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507824  180709 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:39.507913  180709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:39.511523  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.511852  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512084  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512119  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512311  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.512328  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512358  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512566  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.599611  180709 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:39.639739  180709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:39.801939  180709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:39.813366  180709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:39.813453  180709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:39.825610  180709 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:47:39.825642  180709 start.go:496] detecting cgroup driver to use...
	I1202 20:47:39.825772  180709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:39.851955  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:39.871192  180709 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:39.871265  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:39.893578  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:39.915897  180709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:40.157168  180709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:40.351772  180709 docker.go:234] disabling docker service ...
	I1202 20:47:40.351857  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:40.382162  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:40.400292  180709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:40.619600  180709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:40.818294  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:40.836375  180709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:40.862872  180709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:47:40.862953  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.876930  180709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:47:40.877005  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.892088  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.905117  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.917965  180709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:40.932792  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.945233  180709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.959143  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.971613  180709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:40.982500  180709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:40.994339  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:41.169910  180709 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:41.484137  180709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:41.484220  180709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:41.489514  180709 start.go:564] Will wait 60s for crictl version
	I1202 20:47:41.489573  180709 ssh_runner.go:195] Run: which crictl
	I1202 20:47:41.493586  180709 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:47:41.525318  180709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:47:41.525408  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.556371  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.587171  180709 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
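After rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs) and restarting CRI-O, the run above waits up to 60s for /var/run/crio/crio.sock before probing crictl. A minimal standalone sketch of that wait step (not minikube's helper, just the polling idea):

// socketwait.go - illustrative sketch: poll until a socket path exists or a timeout expires.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}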
	I1202 20:47:39.039828  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:39.040546  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:39.040617  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:39.040709  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:39.093195  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:39.093222  176698 cri.go:89] found id: ""
	I1202 20:47:39.093234  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:39.093303  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.100565  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:39.100681  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:39.154481  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:39.154512  176698 cri.go:89] found id: ""
	I1202 20:47:39.154522  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:39.154590  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.159685  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:39.159776  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:39.206790  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:39.206824  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:39.206831  176698 cri.go:89] found id: ""
	I1202 20:47:39.206843  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:39.206939  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.212118  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.218570  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:39.218642  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:39.272598  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:39.272627  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:39.272633  176698 cri.go:89] found id: ""
	I1202 20:47:39.272645  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:39.272746  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.277891  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.283773  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:39.283875  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:39.337937  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:39.337971  176698 cri.go:89] found id: ""
	I1202 20:47:39.337983  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:39.338054  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.343828  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:39.343905  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:39.393256  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:39.393274  176698 cri.go:89] found id: ""
	I1202 20:47:39.393285  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:39.393350  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.399324  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:39.399410  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:39.442159  176698 cri.go:89] found id: ""
	I1202 20:47:39.442196  176698 logs.go:282] 0 containers: []
	W1202 20:47:39.442211  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:39.442219  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:39.442292  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:39.482048  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:39.482077  176698 cri.go:89] found id: ""
	I1202 20:47:39.482089  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:39.482146  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.486424  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:39.486447  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:39.538158  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:39.538200  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:39.577057  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:39.577102  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:39.624196  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:39.624247  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:39.673984  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:39.674019  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:39.718332  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:39.718366  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:40.184219  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:40.184284  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:40.292863  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:40.292913  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:40.312603  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:40.312667  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:40.399412  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:40.399439  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:40.399456  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:40.451748  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:40.451787  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:40.508674  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:40.508721  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:40.583006  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:40.583053  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:40.623067  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:40.623104  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:47:43.179032  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:43.179761  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:43.179828  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:43.179908  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:43.230743  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:43.230772  176698 cri.go:89] found id: ""
	I1202 20:47:43.230783  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:43.230859  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.237813  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:43.237921  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:43.293974  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:43.294004  176698 cri.go:89] found id: ""
	I1202 20:47:43.294016  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:43.294090  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.299156  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:43.299239  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:43.342665  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:43.342696  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:43.342702  176698 cri.go:89] found id: ""
	I1202 20:47:43.342713  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:43.342779  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.347450  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.353853  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:43.353929  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:43.401373  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:43.401399  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:43.401404  176698 cri.go:89] found id: ""
	I1202 20:47:43.401413  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:43.401492  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.407225  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.413187  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:43.413286  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:43.462833  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:43.462863  176698 cri.go:89] found id: ""
	I1202 20:47:43.462875  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:43.462973  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.467815  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:43.467877  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:38.517609  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt ...
	I1202 20:47:38.517641  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: {Name:mkc7f205ec973991f73503e30764038e4ada8e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.517873  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key ...
	I1202 20:47:38.517910  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key: {Name:mk2c0c42c8e2faf6c33f82f7062fcea7c70eb537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.518428  180134 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8
	I1202 20:47:38.518446  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.176]
	I1202 20:47:38.630770  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 ...
	I1202 20:47:38.630814  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8: {Name:mkf045233bd573a7e62274ff983643c2ac949c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.631006  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8 ...
	I1202 20:47:38.631022  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8: {Name:mk4f77b6cab7d226418b939b0450fa455bbf0e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.631119  180134 certs.go:382] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt
	I1202 20:47:38.631196  180134 certs.go:386] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key
	I1202 20:47:38.631257  180134 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key
	I1202 20:47:38.631275  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt with IP's: []
	I1202 20:47:38.752106  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt ...
	I1202 20:47:38.752154  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt: {Name:mkc1bd0a8f67665c6e8bb74f5995e7b732daf6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.752398  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key ...
	I1202 20:47:38.752425  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key: {Name:mk66e494b67bdc506da1a63544b545c9295a12bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.752742  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:47:38.752803  180134 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:47:38.752818  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:47:38.752855  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:47:38.752896  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:47:38.752935  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:47:38.753008  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:38.753633  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:47:38.792862  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:47:38.832680  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:47:38.868513  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:47:38.904790  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:47:38.941194  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:47:38.970544  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:47:39.004988  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:47:39.036607  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:47:39.071483  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:47:39.129826  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:47:39.185426  180134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:47:39.217090  180134 ssh_runner.go:195] Run: openssl version
	I1202 20:47:39.225465  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:47:39.243652  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.250475  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.250548  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.261227  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:47:39.279022  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:47:39.299221  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.306634  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.306750  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.315199  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:47:39.332778  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:47:39.351082  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.358595  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.358705  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.369623  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:47:39.386252  180134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:47:39.393143  180134 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:47:39.393207  180134 kubeadm.go:401] StartCluster: {Name:kindnet-019279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:39.393302  180134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:47:39.393356  180134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:47:39.432963  180134 cri.go:89] found id: ""
	I1202 20:47:39.433056  180134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:47:39.448092  180134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:47:39.461479  180134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:47:39.474230  180134 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:47:39.474259  180134 kubeadm.go:158] found existing configuration files:
	
	I1202 20:47:39.474333  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:47:39.488163  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:47:39.488236  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:47:39.500858  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:47:39.515643  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:47:39.515722  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:47:39.531894  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:47:39.545190  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:47:39.545262  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:47:39.558587  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:47:39.569550  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:47:39.569622  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:47:39.581810  180134 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 20:47:39.786572  180134 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1202 20:47:41.657830  179993 pod_ready.go:104] pod "coredns-66bc5c9577-82fs6" is not "Ready", error: <nil>
	I1202 20:47:42.161919  179993 pod_ready.go:94] pod "coredns-66bc5c9577-82fs6" is "Ready"
	I1202 20:47:42.161954  179993 pod_ready.go:86] duration metric: took 2.510175032s for pod "coredns-66bc5c9577-82fs6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:47:42.161967  179993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-88m47" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:47:44.171831  179993 pod_ready.go:104] pod "coredns-66bc5c9577-88m47" is not "Ready", error: <nil>
	I1202 20:47:41.591217  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591703  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:41.591730  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591928  180709 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 20:47:41.596712  180709 kubeadm.go:884] updating cluster {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:47:41.596857  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:41.596919  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.640327  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.640362  180709 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:47:41.640430  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.679399  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.679421  180709 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:47:41.679428  180709 kubeadm.go:935] updating node { 192.168.39.176 8443 v1.34.2 crio true true} ...
	I1202 20:47:41.679522  180709 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-892862 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:47:41.679586  180709 ssh_runner.go:195] Run: crio config
	I1202 20:47:41.728823  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:41.728896  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:41.728935  180709 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:47:41.728988  180709 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-892862 NodeName:pause-892862 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:47:41.729271  180709 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-892862"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:47:41.729355  180709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:47:41.744415  180709 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:47:41.744505  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:47:41.758529  180709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 20:47:41.787792  180709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:47:41.811286  180709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1202 20:47:41.832600  180709 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I1202 20:47:41.836814  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:42.006895  180709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:42.027123  180709 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862 for IP: 192.168.39.176
	I1202 20:47:42.027153  180709 certs.go:195] generating shared ca certs ...
	I1202 20:47:42.027177  180709 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:42.027375  180709 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:47:42.027422  180709 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:47:42.027429  180709 certs.go:257] generating profile certs ...
	I1202 20:47:42.027518  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/client.key
	I1202 20:47:42.027573  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key.c6c045af
	I1202 20:47:42.027608  180709 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key
	I1202 20:47:42.027757  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:47:42.027788  180709 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:47:42.027794  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:47:42.027818  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:47:42.027840  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:47:42.027867  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:47:42.027933  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:42.028560  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:47:42.065172  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:47:42.098807  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:47:42.133019  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:47:42.169561  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:47:42.204528  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:47:42.246305  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:47:42.361413  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:47:42.430775  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:47:42.493156  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:47:42.571562  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:47:42.666413  180709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:47:42.756478  180709 ssh_runner.go:195] Run: openssl version
	I1202 20:47:42.770080  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:47:42.796346  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806695  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806795  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.822001  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:47:42.850457  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:47:42.874641  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890747  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890825  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.904269  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:47:42.931175  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:47:42.979186  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999319  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999403  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:47:43.013677  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:47:43.040096  180709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:47:43.057377  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:47:43.073530  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:47:43.088015  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:47:43.103880  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:47:43.115613  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:47:43.126096  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:47:43.137582  180709 kubeadm.go:401] StartCluster: {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:43.137716  180709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:47:43.137772  180709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:47:43.241476  180709 cri.go:89] found id: "5ae303b8cd9929f78e5e243a00749433f3a28f47ff958be9b1bd42b8690a0a4f"
	I1202 20:47:43.241501  180709 cri.go:89] found id: "5ab2b69576bd7ddc7b9834385c7475de654c082386d224fb389bea4e68f3c384"
	I1202 20:47:43.241506  180709 cri.go:89] found id: "063bdc2f4044d47e558e971c8d8742aec22bec89182a48c14bc4dc181c60a531"
	I1202 20:47:43.241510  180709 cri.go:89] found id: "9bbfedda04a70bdbc59f66ca20322b7bf1717ad77a590bbc7c2ce4242714ec5c"
	I1202 20:47:43.241514  180709 cri.go:89] found id: "c3ba4033625655ee75b7cdd32c8895e62e5f26321e371238b33d804ab1138926"
	I1202 20:47:43.241518  180709 cri.go:89] found id: "4eb3b7ec4b7d853bf9eb9a01676c24007457097a629f779a01fc49110e7cc47d"
	I1202 20:47:43.241523  180709 cri.go:89] found id: "7a076c19ae69f444d8beaca6206d51a7ea8266bb0ac74b038fb2531b733b0ed1"
	I1202 20:47:43.241527  180709 cri.go:89] found id: "bdb1b64ca24e08df0dda142abb2f57874f9cda21c9400ad109b3980d49353290"
	I1202 20:47:43.241531  180709 cri.go:89] found id: ""
	I1202 20:47:43.241581  180709 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-892862 -n pause-892862
helpers_test.go:269: (dbg) Run:  kubectl --context pause-892862 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-892862 -n pause-892862
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-892862 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-892862 logs -n 25: (1.404880269s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p stopped-upgrade-225043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                          │ stopped-upgrade-225043    │ jenkins │ v1.35.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:45 UTC │
	│ start   │ -p cert-expiration-095611 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                     │ cert-expiration-095611    │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                             │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │                     │
	│ start   │ -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio               │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ stop    │ stopped-upgrade-225043 stop                                                                                                                                 │ stopped-upgrade-225043    │ jenkins │ v1.35.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:45 UTC │
	│ start   │ -p stopped-upgrade-225043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                      │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:45 UTC │ 02 Dec 25 20:46 UTC │
	│ delete  │ -p cert-expiration-095611                                                                                                                                   │ cert-expiration-095611    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p guest-856307 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                     │ guest-856307              │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ delete  │ -p kubernetes-upgrade-950537                                                                                                                                │ kubernetes-upgrade-950537 │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p pause-892862 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                     │ pause-892862              │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:47 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-225043 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │                     │
	│ delete  │ -p stopped-upgrade-225043                                                                                                                                   │ stopped-upgrade-225043    │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:46 UTC │
	│ start   │ -p auto-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                       │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │ 02 Dec 25 20:47 UTC │
	│ start   │ -p kindnet-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                      │ kindnet-019279            │ jenkins │ v1.37.0 │ 02 Dec 25 20:46 UTC │                     │
	│ start   │ -p pause-892862 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                              │ pause-892862              │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 pgrep -a kubelet                                                                                                                             │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:47 UTC │ 02 Dec 25 20:47 UTC │
	│ ssh     │ -p auto-019279 sudo cat /etc/nsswitch.conf                                                                                                                  │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo cat /etc/resolv.conf                                                                                                                    │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo crictl pods                                                                                                                             │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo crictl ps --all                                                                                                                         │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                  │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo ip a s                                                                                                                                  │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo ip r s                                                                                                                                  │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo iptables-save                                                                                                                           │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │ 02 Dec 25 20:48 UTC │
	│ ssh     │ -p auto-019279 sudo iptables -t nat -L -n -v                                                                                                                │ auto-019279               │ jenkins │ v1.37.0 │ 02 Dec 25 20:48 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 20:47:30
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 20:47:30.163624  180709 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:47:30.163949  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.163966  180709 out.go:374] Setting ErrFile to fd 2...
	I1202 20:47:30.163973  180709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:47:30.164238  180709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:47:30.164729  180709 out.go:368] Setting JSON to false
	I1202 20:47:30.165625  180709 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8994,"bootTime":1764699456,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:47:30.165711  180709 start.go:143] virtualization: kvm guest
	I1202 20:47:30.167575  180709 out.go:179] * [pause-892862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:47:30.168709  180709 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:47:30.168733  180709 notify.go:221] Checking for updates...
	I1202 20:47:30.171934  180709 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:47:30.173149  180709 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:47:30.174456  180709 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 20:47:30.176184  180709 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:47:30.177782  180709 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:47:30.179496  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:30.180059  180709 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:47:30.219965  180709 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 20:47:30.221042  180709 start.go:309] selected driver: kvm2
	I1202 20:47:30.221059  180709 start.go:927] validating driver "kvm2" against &{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.221242  180709 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:47:30.222704  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:30.222789  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:30.222852  180709 start.go:353] cluster config:
	{Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:30.223016  180709 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 20:47:30.226983  180709 out.go:179] * Starting "pause-892862" primary control-plane node in "pause-892862" cluster
	I1202 20:47:31.094777  179993 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503160671s
	I1202 20:47:31.116228  179993 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1202 20:47:31.131202  179993 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1202 20:47:31.149434  179993 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1202 20:47:31.149709  179993 kubeadm.go:319] [mark-control-plane] Marking the node auto-019279 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1202 20:47:31.161563  179993 kubeadm.go:319] [bootstrap-token] Using token: 78nlm6.i9hh1cmbbamz8gh4
	I1202 20:47:31.163631  179993 out.go:252]   - Configuring RBAC rules ...
	I1202 20:47:31.163835  179993 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1202 20:47:31.171768  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1202 20:47:31.179155  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1202 20:47:31.182233  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1202 20:47:31.186717  179993 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1202 20:47:31.191222  179993 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1202 20:47:31.503295  179993 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1202 20:47:31.969560  179993 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1202 20:47:32.502187  179993 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1202 20:47:32.503125  179993 kubeadm.go:319] 
	I1202 20:47:32.503309  179993 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1202 20:47:32.503333  179993 kubeadm.go:319] 
	I1202 20:47:32.503458  179993 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1202 20:47:32.503482  179993 kubeadm.go:319] 
	I1202 20:47:32.503517  179993 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1202 20:47:32.503601  179993 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1202 20:47:32.503694  179993 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1202 20:47:32.503705  179993 kubeadm.go:319] 
	I1202 20:47:32.503788  179993 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1202 20:47:32.503808  179993 kubeadm.go:319] 
	I1202 20:47:32.503866  179993 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1202 20:47:32.503875  179993 kubeadm.go:319] 
	I1202 20:47:32.503950  179993 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1202 20:47:32.504067  179993 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1202 20:47:32.504171  179993 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1202 20:47:32.504183  179993 kubeadm.go:319] 
	I1202 20:47:32.504286  179993 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1202 20:47:32.504389  179993 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1202 20:47:32.504412  179993 kubeadm.go:319] 
	I1202 20:47:32.504526  179993 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 78nlm6.i9hh1cmbbamz8gh4 \
	I1202 20:47:32.504703  179993 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 \
	I1202 20:47:32.504742  179993 kubeadm.go:319] 	--control-plane 
	I1202 20:47:32.504755  179993 kubeadm.go:319] 
	I1202 20:47:32.504879  179993 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1202 20:47:32.504890  179993 kubeadm.go:319] 
	I1202 20:47:32.504983  179993 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 78nlm6.i9hh1cmbbamz8gh4 \
	I1202 20:47:32.505161  179993 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:164b9536bcfe41c4174c32548d219b78812180977735903d1dc928867094e350 
	I1202 20:47:32.506350  179993 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1202 20:47:32.506377  179993 cni.go:84] Creating CNI manager for ""
	I1202 20:47:32.506388  179993 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:32.508249  179993 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
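Once kubeadm reports success, the kubeconfig steps it prints above are the quickest way to sanity-check the new control plane from inside the guest. A minimal sketch using the exact commands from the init output (the final `kubectl get nodes` is an assumed follow-up check, not part of the log):
	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config
	kubectl get nodes   # node stays NotReady until a pod network (here: bridge CNI) is configured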
	I1202 20:47:30.228778  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:30.228833  180709 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 20:47:30.228846  180709 cache.go:65] Caching tarball of preloaded images
	I1202 20:47:30.228948  180709 preload.go:238] Found /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1202 20:47:30.228959  180709 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1202 20:47:30.229115  180709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/config.json ...
	I1202 20:47:30.229349  180709 start.go:360] acquireMachinesLock for pause-892862: {Name:mk87259b3368832a6a6ed41448f2ab0149793b9b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1202 20:47:32.926107  180709 start.go:364] duration metric: took 2.696699728s to acquireMachinesLock for "pause-892862"
	I1202 20:47:32.926168  180709 start.go:96] Skipping create...Using existing machine configuration
	I1202 20:47:32.926189  180709 fix.go:54] fixHost starting: 
	I1202 20:47:32.928804  180709 fix.go:112] recreateIfNeeded on pause-892862: state=Running err=<nil>
	W1202 20:47:32.928835  180709 fix.go:138] unexpected machine state, will restart: <nil>
	I1202 20:47:30.742110  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:30.743070  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:30.743141  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:30.743193  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:30.785582  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:30.785613  176698 cri.go:89] found id: ""
	I1202 20:47:30.785627  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:30.785722  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.790405  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:30.790476  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:30.833179  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:30.833205  176698 cri.go:89] found id: ""
	I1202 20:47:30.833216  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:30.833282  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.837789  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:30.837876  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:30.879081  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:30.879117  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:30.879124  176698 cri.go:89] found id: ""
	I1202 20:47:30.879135  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:30.879218  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.884605  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.889809  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:30.889895  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:30.933181  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:30.933207  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:30.933212  176698 cri.go:89] found id: ""
	I1202 20:47:30.933220  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:30.933276  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.937670  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.942933  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:30.943017  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:30.987424  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:30.987454  176698 cri.go:89] found id: ""
	I1202 20:47:30.987464  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:30.987534  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:30.991841  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:30.991921  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:31.027277  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:31.027309  176698 cri.go:89] found id: ""
	I1202 20:47:31.027321  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:31.027393  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:31.032207  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:31.032295  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:31.075508  176698 cri.go:89] found id: ""
	I1202 20:47:31.075540  176698 logs.go:282] 0 containers: []
	W1202 20:47:31.075552  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:31.075560  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:31.075636  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:31.115010  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:31.115038  176698 cri.go:89] found id: ""
	I1202 20:47:31.115050  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:31.115119  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:31.120648  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:31.120697  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:31.193602  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:31.193627  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:31.193651  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:31.235972  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:31.236012  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:31.286758  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:31.286800  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:31.327870  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:31.327900  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:31.412866  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:31.412911  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:31.451014  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:31.451067  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:31.492259  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:31.492290  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:31.963139  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:31.963180  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:32.070242  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:32.070323  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:32.090339  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:32.090374  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:32.141380  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:32.141425  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:32.183760  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:32.183795  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:32.226518  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:32.226548  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
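The post-mortem gathering above reduces to a few crictl and journalctl calls per component. A condensed shell sketch of the same sequence, run over SSH inside the guest (the container name is one example taken from the log):
	# resolve a component's container ID, then tail its logs
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	[ -n "$ID" ] && sudo crictl logs --tail 400 "$ID"
	# runtime, kubelet and kernel logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	# overall container status
	sudo crictl ps -a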
	I1202 20:47:31.240112  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.240928  180134 main.go:143] libmachine: domain kindnet-019279 has current primary IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.240956  180134 main.go:143] libmachine: found domain IP: 192.168.83.176
	I1202 20:47:31.240967  180134 main.go:143] libmachine: reserving static IP address...
	I1202 20:47:31.241570  180134 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-019279", mac: "52:54:00:48:bb:87", ip: "192.168.83.176"} in network mk-kindnet-019279
	I1202 20:47:31.511193  180134 main.go:143] libmachine: reserved static IP address 192.168.83.176 for domain kindnet-019279
	I1202 20:47:31.511219  180134 main.go:143] libmachine: waiting for SSH...
	I1202 20:47:31.511228  180134 main.go:143] libmachine: Getting to WaitForSSH function...
	I1202 20:47:31.515342  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.516101  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.516146  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.516381  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.516753  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.516772  180134 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1202 20:47:31.655881  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:31.656310  180134 main.go:143] libmachine: domain creation complete
	I1202 20:47:31.658450  180134 machine.go:94] provisionDockerMachine start ...
	I1202 20:47:31.661350  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.661904  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.661942  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.662166  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.662389  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.662402  180134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:47:31.782896  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1202 20:47:31.782929  180134 buildroot.go:166] provisioning hostname "kindnet-019279"
	I1202 20:47:31.787013  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.787537  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.787563  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.787798  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.788097  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.788113  180134 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-019279 && echo "kindnet-019279" | sudo tee /etc/hostname
	I1202 20:47:31.931616  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-019279
	
	I1202 20:47:31.935061  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.935613  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:31.935652  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:31.935868  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:31.936164  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:31.936187  180134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-019279' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-019279/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-019279' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:47:32.069533  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:32.069574  180134 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:47:32.069607  180134 buildroot.go:174] setting up certificates
	I1202 20:47:32.069619  180134 provision.go:84] configureAuth start
	I1202 20:47:32.073872  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.074454  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.074506  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.077822  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.078380  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.078443  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.078626  180134 provision.go:143] copyHostCerts
	I1202 20:47:32.078700  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:47:32.078716  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:47:32.078810  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:47:32.078959  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:47:32.078977  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:47:32.079030  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:47:32.079134  180134 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:47:32.079149  180134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:47:32.079194  180134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:47:32.079274  180134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.kindnet-019279 san=[127.0.0.1 192.168.83.176 kindnet-019279 localhost minikube]
	I1202 20:47:32.185213  180134 provision.go:177] copyRemoteCerts
	I1202 20:47:32.185281  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:47:32.189047  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.189543  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.189574  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.189828  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.282915  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1202 20:47:32.313619  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1202 20:47:32.345879  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:47:32.376338  180134 provision.go:87] duration metric: took 306.701427ms to configureAuth
	I1202 20:47:32.376370  180134 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:47:32.376588  180134 config.go:182] Loaded profile config "kindnet-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:32.379586  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.380091  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.380121  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.380359  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.380638  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:32.380667  180134 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:47:32.653857  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:32.653899  180134 machine.go:97] duration metric: took 995.42366ms to provisionDockerMachine
	I1202 20:47:32.653916  180134 client.go:176] duration metric: took 19.961397793s to LocalClient.Create
	I1202 20:47:32.653941  180134 start.go:167] duration metric: took 19.961478847s to libmachine.API.Create "kindnet-019279"
	I1202 20:47:32.653950  180134 start.go:293] postStartSetup for "kindnet-019279" (driver="kvm2")
	I1202 20:47:32.653962  180134 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:32.654036  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:32.656850  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.657288  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.657315  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.657468  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.748696  180134 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:32.754455  180134 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:47:32.754491  180134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:47:32.754565  180134 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:47:32.754645  180134 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:47:32.754753  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:32.767715  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:32.799230  180134 start.go:296] duration metric: took 145.264325ms for postStartSetup
	I1202 20:47:32.803168  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.803580  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.803618  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.803995  180134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/config.json ...
	I1202 20:47:32.804308  180134 start.go:128] duration metric: took 20.113901821s to createHost
	I1202 20:47:32.806684  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.807026  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.807054  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.807245  180134 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.807524  180134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.176 22 <nil> <nil>}
	I1202 20:47:32.807541  180134 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:47:32.925909  180134 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764708452.887389419
	
	I1202 20:47:32.925937  180134 fix.go:216] guest clock: 1764708452.887389419
	I1202 20:47:32.925948  180134 fix.go:229] Guest: 2025-12-02 20:47:32.887389419 +0000 UTC Remote: 2025-12-02 20:47:32.804326988 +0000 UTC m=+59.344015853 (delta=83.062431ms)
	I1202 20:47:32.925968  180134 fix.go:200] guest clock delta is within tolerance: 83.062431ms
	I1202 20:47:32.925973  180134 start.go:83] releasing machines lock for "kindnet-019279", held for 20.235712607s
	I1202 20:47:32.929648  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.930339  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.930371  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.930999  180134 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:32.931134  180134 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:32.935840  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.935859  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936323  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.936343  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:32.936363  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936375  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:32.936706  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:32.936728  180134 sshutil.go:53] new ssh client: &{IP:192.168.83.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/kindnet-019279/id_rsa Username:docker}
	I1202 20:47:33.054348  180134 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:33.061280  180134 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:33.232186  180134 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:33.240876  180134 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:33.240945  180134 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:33.263136  180134 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1202 20:47:33.263166  180134 start.go:496] detecting cgroup driver to use...
	I1202 20:47:33.263280  180134 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:33.283484  180134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:33.304395  180134 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:33.304467  180134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:33.324506  180134 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:33.342436  180134 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:33.507041  180134 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:33.748198  180134 docker.go:234] disabling docker service ...
	I1202 20:47:33.748275  180134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:33.771105  180134 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:33.791884  180134 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:33.953006  180134 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:34.109132  180134 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:34.124797  180134 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:34.147513  180134 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:47:34.147577  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.160317  180134 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:47:34.160383  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.173150  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.185560  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.198867  180134 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:34.211859  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.224336  180134 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.245263  180134 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:34.260844  180134 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:34.276504  180134 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1202 20:47:34.276569  180134 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1202 20:47:34.304585  180134 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:34.317393  180134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:34.459892  180134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:34.576296  180134 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:34.576390  180134 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:34.582102  180134 start.go:564] Will wait 60s for crictl version
	I1202 20:47:34.582172  180134 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.586541  180134 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:47:34.621136  180134 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:47:34.621211  180134 ssh_runner.go:195] Run: crio --version
	I1202 20:47:34.654121  180134 ssh_runner.go:195] Run: crio --version
	I1202 20:47:34.684829  180134 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
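The CRI-O preparation above boils down to pointing crictl at the CRI-O socket, adjusting the pause image and cgroup driver in the drop-in config, and restarting the service. A condensed sketch of those steps, with paths and values copied from the log (minikube performs a few additional sed edits not repeated here):
	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# set the pause image and cgroup driver in the drop-in config
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart the runtime
	sudo systemctl daemon-reload && sudo systemctl restart crio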
	I1202 20:47:32.509797  179993 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1202 20:47:32.524633  179993 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1202 20:47:32.556241  179993 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1202 20:47:32.556317  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:32.556365  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-019279 minikube.k8s.io/updated_at=2025_12_02T20_47_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c0dec3f81fa1d67ed4ee425e0873d0aa009f9b92 minikube.k8s.io/name=auto-019279 minikube.k8s.io/primary=true
	I1202 20:47:32.602552  179993 ops.go:34] apiserver oom_adj: -16
	I1202 20:47:32.748081  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:33.248918  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:33.748360  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:34.248791  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:34.748677  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
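The repeated "get sa default" runs above are a wait loop: after writing the bridge CNI config and creating the minikube-rbac cluster role binding, minikube polls roughly every 500ms until the default service account exists in the new cluster. A minimal sketch of that wait, using the kubectl binary and kubeconfig paths shown in the log:
	# retry until the default service account has been created
	until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done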
	I1202 20:47:32.930871  180709 out.go:252] * Updating the running kvm2 "pause-892862" VM ...
	I1202 20:47:32.930918  180709 machine.go:94] provisionDockerMachine start ...
	I1202 20:47:32.935644  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936189  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:32.936232  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:32.936462  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:32.936769  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:32.936786  180709 main.go:143] libmachine: About to run SSH command:
	hostname
	I1202 20:47:33.047808  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.047852  180709 buildroot.go:166] provisioning hostname "pause-892862"
	I1202 20:47:33.051443  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052113  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.052158  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.052768  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.053004  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.053040  180709 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-892862 && echo "pause-892862" | sudo tee /etc/hostname
	I1202 20:47:33.185041  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-892862
	
	I1202 20:47:33.188630  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189134  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.189175  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.189418  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.189681  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.189709  180709 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-892862' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-892862/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-892862' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1202 20:47:33.298881  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1202 20:47:33.298914  180709 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21997-143119/.minikube CaCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21997-143119/.minikube}
	I1202 20:47:33.298941  180709 buildroot.go:174] setting up certificates
	I1202 20:47:33.298959  180709 provision.go:84] configureAuth start
	I1202 20:47:33.302402  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.302967  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.302999  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305549  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.305961  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.305983  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.306161  180709 provision.go:143] copyHostCerts
	I1202 20:47:33.306220  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem, removing ...
	I1202 20:47:33.306233  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem
	I1202 20:47:33.306318  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/cert.pem (1123 bytes)
	I1202 20:47:33.306443  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem, removing ...
	I1202 20:47:33.306453  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem
	I1202 20:47:33.306479  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/key.pem (1675 bytes)
	I1202 20:47:33.306565  180709 exec_runner.go:144] found /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem, removing ...
	I1202 20:47:33.306577  180709 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem
	I1202 20:47:33.306609  180709 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21997-143119/.minikube/ca.pem (1082 bytes)
	I1202 20:47:33.306711  180709 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem org=jenkins.pause-892862 san=[127.0.0.1 192.168.39.176 localhost minikube pause-892862]
	I1202 20:47:33.378291  180709 provision.go:177] copyRemoteCerts
	I1202 20:47:33.378348  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1202 20:47:33.380736  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381141  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.381167  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.381324  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:33.470745  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1202 20:47:33.504748  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1202 20:47:33.541137  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1202 20:47:33.579109  180709 provision.go:87] duration metric: took 280.127807ms to configureAuth
	I1202 20:47:33.579147  180709 buildroot.go:189] setting minikube options for container-runtime
	I1202 20:47:33.579375  180709 config.go:182] Loaded profile config "pause-892862": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:33.583108  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583711  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:33.583741  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:33.583957  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:33.584207  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:33.584224  180709 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1202 20:47:35.248415  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:35.748738  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:36.248890  179993 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1202 20:47:36.372059  179993 kubeadm.go:1114] duration metric: took 3.815824768s to wait for elevateKubeSystemPrivileges
	I1202 20:47:36.372102  179993 kubeadm.go:403] duration metric: took 17.347266589s to StartCluster
	I1202 20:47:36.372124  179993 settings.go:142] acquiring lock: {Name:mka4c337368f188b532e41dc38505f24fc351556 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:36.372219  179993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:47:36.373645  179993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/kubeconfig: {Name:mk45f2610791f17b0d78039ad0468591c7331759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:36.374002  179993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1202 20:47:36.374004  179993 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.61.205 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1202 20:47:36.374108  179993 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1202 20:47:36.374263  179993 addons.go:70] Setting storage-provisioner=true in profile "auto-019279"
	I1202 20:47:36.374288  179993 addons.go:239] Setting addon storage-provisioner=true in "auto-019279"
	I1202 20:47:36.374286  179993 config.go:182] Loaded profile config "auto-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:47:36.374328  179993 host.go:66] Checking if "auto-019279" exists ...
	I1202 20:47:36.374353  179993 addons.go:70] Setting default-storageclass=true in profile "auto-019279"
	I1202 20:47:36.374376  179993 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-019279"
	I1202 20:47:36.376150  179993 out.go:179] * Verifying Kubernetes components...
	I1202 20:47:36.377527  179993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:36.377624  179993 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1202 20:47:36.378770  179993 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:47:36.378790  179993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1202 20:47:36.379541  179993 addons.go:239] Setting addon default-storageclass=true in "auto-019279"
	I1202 20:47:36.379586  179993 host.go:66] Checking if "auto-019279" exists ...
	I1202 20:47:36.382171  179993 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1202 20:47:36.382194  179993 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1202 20:47:36.382923  179993 main.go:143] libmachine: domain auto-019279 has defined MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.383757  179993 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:7d:16", ip: ""} in network mk-auto-019279: {Iface:virbr3 ExpiryTime:2025-12-02 21:47:07 +0000 UTC Type:0 Mac:52:54:00:24:7d:16 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:auto-019279 Clientid:01:52:54:00:24:7d:16}
	I1202 20:47:36.383793  179993 main.go:143] libmachine: domain auto-019279 has defined IP address 192.168.61.205 and MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.384104  179993 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/auto-019279/id_rsa Username:docker}
	I1202 20:47:36.386652  179993 main.go:143] libmachine: domain auto-019279 has defined MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.387337  179993 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:24:7d:16", ip: ""} in network mk-auto-019279: {Iface:virbr3 ExpiryTime:2025-12-02 21:47:07 +0000 UTC Type:0 Mac:52:54:00:24:7d:16 Iaid: IPaddr:192.168.61.205 Prefix:24 Hostname:auto-019279 Clientid:01:52:54:00:24:7d:16}
	I1202 20:47:36.387381  179993 main.go:143] libmachine: domain auto-019279 has defined IP address 192.168.61.205 and MAC address 52:54:00:24:7d:16 in network mk-auto-019279
	I1202 20:47:36.387588  179993 sshutil.go:53] new ssh client: &{IP:192.168.61.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/auto-019279/id_rsa Username:docker}
	I1202 20:47:36.571641  179993 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1202 20:47:36.684683  179993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:36.875873  179993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1202 20:47:36.920118  179993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1202 20:47:37.440753  179993 start.go:977] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1202 20:47:37.443756  179993 node_ready.go:35] waiting up to 15m0s for node "auto-019279" to be "Ready" ...
	I1202 20:47:37.473156  179993 node_ready.go:49] node "auto-019279" is "Ready"
	I1202 20:47:37.473189  179993 node_ready.go:38] duration metric: took 29.40163ms for node "auto-019279" to be "Ready" ...
	I1202 20:47:37.473203  179993 api_server.go:52] waiting for apiserver process to appear ...
	I1202 20:47:37.473255  179993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:47:37.949703  179993 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-019279" context rescaled to 1 replicas
	I1202 20:47:38.097048  179993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.221126786s)
	I1202 20:47:38.097095  179993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.176941722s)
	I1202 20:47:38.097176  179993 api_server.go:72] duration metric: took 1.723134624s to wait for apiserver process to appear ...
	I1202 20:47:38.097217  179993 api_server.go:88] waiting for apiserver healthz status ...
	I1202 20:47:38.097240  179993 api_server.go:253] Checking apiserver healthz at https://192.168.61.205:8443/healthz ...
	I1202 20:47:38.114469  179993 api_server.go:279] https://192.168.61.205:8443/healthz returned 200:
	ok
	I1202 20:47:38.118999  179993 api_server.go:141] control plane version: v1.34.2
	I1202 20:47:38.119032  179993 api_server.go:131] duration metric: took 21.805987ms to wait for apiserver health ...
	I1202 20:47:38.119128  179993 system_pods.go:43] waiting for kube-system pods to appear ...
	I1202 20:47:38.120739  179993 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1202 20:47:34.774189  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:34.775043  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:34.775116  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:34.775192  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:34.818474  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:34.818504  176698 cri.go:89] found id: ""
	I1202 20:47:34.818515  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:34.818584  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.822986  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:34.823088  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:34.874618  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:34.874651  176698 cri.go:89] found id: ""
	I1202 20:47:34.874681  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:34.874765  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.879383  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:34.879459  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:34.922937  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:34.922964  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:34.922971  176698 cri.go:89] found id: ""
	I1202 20:47:34.922982  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:34.923055  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.928248  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.933324  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:34.933402  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:34.978298  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:34.978323  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:34.978330  176698 cri.go:89] found id: ""
	I1202 20:47:34.978340  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:34.978410  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.983977  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:34.988784  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:34.988859  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:35.036448  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:35.036480  176698 cri.go:89] found id: ""
	I1202 20:47:35.036496  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:35.036568  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.041667  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:35.041749  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:35.099760  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:35.099790  176698 cri.go:89] found id: ""
	I1202 20:47:35.099801  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:35.099883  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.106030  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:35.106123  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:35.157596  176698 cri.go:89] found id: ""
	I1202 20:47:35.157633  176698 logs.go:282] 0 containers: []
	W1202 20:47:35.157646  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:35.157685  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:35.157763  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:35.207306  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:35.207347  176698 cri.go:89] found id: ""
	I1202 20:47:35.207360  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:35.207445  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:35.213574  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:35.213611  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:35.317251  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:35.317331  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:35.317352  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:35.377824  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:35.377865  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:35.433125  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:35.433167  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:35.495814  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:35.495858  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:35.611102  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:35.611154  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:36.137070  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:36.137111  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:47:36.189784  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:36.189831  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:36.283354  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:36.283395  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:36.302621  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:36.302669  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:36.347105  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:36.347146  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:36.389059  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:36.389098  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:36.442604  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:36.442638  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:36.494136  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:36.494164  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:34.689208  180134 main.go:143] libmachine: domain kindnet-019279 has defined MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:34.689627  180134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:bb:87", ip: ""} in network mk-kindnet-019279: {Iface:virbr5 ExpiryTime:2025-12-02 21:47:29 +0000 UTC Type:0 Mac:52:54:00:48:bb:87 Iaid: IPaddr:192.168.83.176 Prefix:24 Hostname:kindnet-019279 Clientid:01:52:54:00:48:bb:87}
	I1202 20:47:34.689673  180134 main.go:143] libmachine: domain kindnet-019279 has defined IP address 192.168.83.176 and MAC address 52:54:00:48:bb:87 in network mk-kindnet-019279
	I1202 20:47:34.689897  180134 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1202 20:47:34.694931  180134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:47:34.710536  180134 kubeadm.go:884] updating cluster {Name:kindnet-019279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:47:34.710702  180134 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:34.710756  180134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:34.742132  180134 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1202 20:47:34.742237  180134 ssh_runner.go:195] Run: which lz4
	I1202 20:47:34.746981  180134 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1202 20:47:34.752139  180134 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1202 20:47:34.752177  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1202 20:47:36.124481  180134 crio.go:462] duration metric: took 1.377530742s to copy over tarball
	I1202 20:47:36.124586  180134 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1202 20:47:37.902136  180134 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.777508517s)
	I1202 20:47:37.902187  180134 crio.go:469] duration metric: took 1.777659621s to extract the tarball
	I1202 20:47:37.902197  180134 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1202 20:47:37.944478  180134 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:38.002940  180134 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:38.002964  180134 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:47:38.002971  180134 kubeadm.go:935] updating node { 192.168.83.176 8443 v1.34.2 crio true true} ...
	I1202 20:47:38.003064  180134 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-019279 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1202 20:47:38.003133  180134 ssh_runner.go:195] Run: crio config
	I1202 20:47:38.074332  180134 cni.go:84] Creating CNI manager for "kindnet"
	I1202 20:47:38.074374  180134 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:47:38.074410  180134 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.176 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-019279 NodeName:kindnet-019279 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:47:38.074567  180134 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-019279"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:47:38.074665  180134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:47:38.089202  180134 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:47:38.089274  180134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:47:38.103041  180134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1202 20:47:38.131005  180134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:47:38.156015  180134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1202 20:47:38.182702  180134 ssh_runner.go:195] Run: grep 192.168.83.176	control-plane.minikube.internal$ /etc/hosts
	I1202 20:47:38.187283  180134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1202 20:47:38.208995  180134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:38.370415  180134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:38.392535  180134 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279 for IP: 192.168.83.176
	I1202 20:47:38.392573  180134 certs.go:195] generating shared ca certs ...
	I1202 20:47:38.392600  180134 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.392841  180134 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:47:38.392923  180134 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:47:38.392940  180134 certs.go:257] generating profile certs ...
	I1202 20:47:38.393027  180134 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key
	I1202 20:47:38.393047  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt with IP's: []
	I1202 20:47:38.122006  179993 addons.go:530] duration metric: took 1.74789337s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1202 20:47:38.127703  179993 system_pods.go:59] 8 kube-system pods found
	I1202 20:47:38.127768  179993 system_pods.go:61] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.127789  179993 system_pods.go:61] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.127802  179993 system_pods.go:61] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.127826  179993 system_pods.go:61] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.127843  179993 system_pods.go:61] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.127857  179993 system_pods.go:61] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:47:38.127869  179993 system_pods.go:61] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.127876  179993 system_pods.go:61] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending
	I1202 20:47:38.127889  179993 system_pods.go:74] duration metric: took 8.737313ms to wait for pod list to return data ...
	I1202 20:47:38.127903  179993 default_sa.go:34] waiting for default service account to be created ...
	I1202 20:47:38.135830  179993 default_sa.go:45] found service account: "default"
	I1202 20:47:38.135860  179993 default_sa.go:55] duration metric: took 7.947755ms for default service account to be created ...
	I1202 20:47:38.135873  179993 system_pods.go:116] waiting for k8s-apps to be running ...
	I1202 20:47:38.146214  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.146253  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.146263  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.146273  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.146283  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.146295  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.146308  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1202 20:47:38.146319  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.146328  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending
	I1202 20:47:38.146365  179993 retry.go:31] will retry after 265.55355ms: missing components: kube-dns, kube-proxy
	I1202 20:47:38.424516  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.424559  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.424570  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.424675  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.424694  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.424707  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.424717  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:38.424726  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.424734  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:38.424755  179993 retry.go:31] will retry after 269.333893ms: missing components: kube-dns
	I1202 20:47:38.699540  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:38.699600  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.699613  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:38.699623  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:38.699634  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:38.699648  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:38.699677  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:38.699692  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:38.699699  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:38.699722  179993 retry.go:31] will retry after 479.698489ms: missing components: kube-dns
	I1202 20:47:39.210986  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:39.211050  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.211064  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.211076  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:39.211088  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:39.211102  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:39.211113  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:39.211128  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:39.211136  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:39.211160  179993 retry.go:31] will retry after 380.187566ms: missing components: kube-dns
	I1202 20:47:39.597001  179993 system_pods.go:86] 8 kube-system pods found
	I1202 20:47:39.597040  179993 system_pods.go:89] "coredns-66bc5c9577-82fs6" [e6845286-b116-4f6f-bf4e-b78a6a6def60] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.597049  179993 system_pods.go:89] "coredns-66bc5c9577-88m47" [b9c11de6-abb8-4d77-b1f1-982be301e7ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1202 20:47:39.597058  179993 system_pods.go:89] "etcd-auto-019279" [170b4c31-9be1-4955-a948-201f373de427] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1202 20:47:39.597064  179993 system_pods.go:89] "kube-apiserver-auto-019279" [3ebab6b1-b1df-49de-9df8-f446bde8e4a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1202 20:47:39.597073  179993 system_pods.go:89] "kube-controller-manager-auto-019279" [e20c6da4-12e5-499f-8de2-c32a820118ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1202 20:47:39.597078  179993 system_pods.go:89] "kube-proxy-d2t4c" [b17c97c6-3667-4a85-bfd7-f17c67772e93] Running
	I1202 20:47:39.597088  179993 system_pods.go:89] "kube-scheduler-auto-019279" [ec5403b4-b7c5-47b4-a788-1a9ac4a3b763] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1202 20:47:39.597095  179993 system_pods.go:89] "storage-provisioner" [97dd7596-991e-4066-9834-563afecb5b49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1202 20:47:39.597106  179993 system_pods.go:126] duration metric: took 1.461225378s to wait for k8s-apps to be running ...
	I1202 20:47:39.597121  179993 system_svc.go:44] waiting for kubelet service to be running ....
	I1202 20:47:39.597174  179993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:47:39.621315  179993 system_svc.go:56] duration metric: took 24.179703ms WaitForService to wait for kubelet
	I1202 20:47:39.621355  179993 kubeadm.go:587] duration metric: took 3.24731856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1202 20:47:39.621380  179993 node_conditions.go:102] verifying NodePressure condition ...
	I1202 20:47:39.626571  179993 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1202 20:47:39.626607  179993 node_conditions.go:123] node cpu capacity is 2
	I1202 20:47:39.626627  179993 node_conditions.go:105] duration metric: took 5.239878ms to run NodePressure ...
	I1202 20:47:39.626643  179993 start.go:242] waiting for startup goroutines ...
	I1202 20:47:39.626675  179993 start.go:247] waiting for cluster config update ...
	I1202 20:47:39.626693  179993 start.go:256] writing updated cluster config ...
	I1202 20:47:39.636799  179993 ssh_runner.go:195] Run: rm -f paused
	I1202 20:47:39.645606  179993 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1202 20:47:39.651753  179993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-82fs6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:47:39.214350  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1202 20:47:39.214378  180709 machine.go:97] duration metric: took 6.283447535s to provisionDockerMachine
	I1202 20:47:39.214393  180709 start.go:293] postStartSetup for "pause-892862" (driver="kvm2")
	I1202 20:47:39.214406  180709 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1202 20:47:39.214474  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1202 20:47:39.219158  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.219732  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.219770  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.220034  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.314156  180709 ssh_runner.go:195] Run: cat /etc/os-release
	I1202 20:47:39.320511  180709 info.go:137] Remote host: Buildroot 2025.02
	I1202 20:47:39.320551  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/addons for local assets ...
	I1202 20:47:39.320667  180709 filesync.go:126] Scanning /home/jenkins/minikube-integration/21997-143119/.minikube/files for local assets ...
	I1202 20:47:39.320779  180709 filesync.go:149] local asset: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem -> 1470702.pem in /etc/ssl/certs
	I1202 20:47:39.320906  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1202 20:47:39.340926  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:39.382551  180709 start.go:296] duration metric: took 168.137636ms for postStartSetup
	I1202 20:47:39.382618  180709 fix.go:56] duration metric: took 6.456440939s for fixHost
	I1202 20:47:39.386893  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387430  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.387478  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.387794  180709 main.go:143] libmachine: Using SSH client type: native
	I1202 20:47:39.388131  180709 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1202 20:47:39.388152  180709 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1202 20:47:39.503084  180709 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764708459.496621228
	
	I1202 20:47:39.503107  180709 fix.go:216] guest clock: 1764708459.496621228
	I1202 20:47:39.503116  180709 fix.go:229] Guest: 2025-12-02 20:47:39.496621228 +0000 UTC Remote: 2025-12-02 20:47:39.382625482 +0000 UTC m=+9.271396085 (delta=113.995746ms)
	I1202 20:47:39.503140  180709 fix.go:200] guest clock delta is within tolerance: 113.995746ms
	I1202 20:47:39.503147  180709 start.go:83] releasing machines lock for "pause-892862", held for 6.576997859s
	I1202 20:47:39.506571  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507124  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.507156  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.507824  180709 ssh_runner.go:195] Run: cat /version.json
	I1202 20:47:39.507913  180709 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1202 20:47:39.511523  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.511852  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512084  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512119  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512311  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.512328  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:39.512358  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:39.512566  180709 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/pause-892862/id_rsa Username:docker}
	I1202 20:47:39.599611  180709 ssh_runner.go:195] Run: systemctl --version
	I1202 20:47:39.639739  180709 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1202 20:47:39.801939  180709 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1202 20:47:39.813366  180709 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1202 20:47:39.813453  180709 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1202 20:47:39.825610  180709 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1202 20:47:39.825642  180709 start.go:496] detecting cgroup driver to use...
	I1202 20:47:39.825772  180709 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1202 20:47:39.851955  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1202 20:47:39.871192  180709 docker.go:218] disabling cri-docker service (if available) ...
	I1202 20:47:39.871265  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1202 20:47:39.893578  180709 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1202 20:47:39.915897  180709 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1202 20:47:40.157168  180709 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1202 20:47:40.351772  180709 docker.go:234] disabling docker service ...
	I1202 20:47:40.351857  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1202 20:47:40.382162  180709 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1202 20:47:40.400292  180709 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1202 20:47:40.619600  180709 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1202 20:47:40.818294  180709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1202 20:47:40.836375  180709 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1202 20:47:40.862872  180709 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1202 20:47:40.862953  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.876930  180709 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1202 20:47:40.877005  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.892088  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.905117  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.917965  180709 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1202 20:47:40.932792  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.945233  180709 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.959143  180709 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1202 20:47:40.971613  180709 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1202 20:47:40.982500  180709 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1202 20:47:40.994339  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:41.169910  180709 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1202 20:47:41.484137  180709 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1202 20:47:41.484220  180709 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1202 20:47:41.489514  180709 start.go:564] Will wait 60s for crictl version
	I1202 20:47:41.489573  180709 ssh_runner.go:195] Run: which crictl
	I1202 20:47:41.493586  180709 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1202 20:47:41.525318  180709 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1202 20:47:41.525408  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.556371  180709 ssh_runner.go:195] Run: crio --version
	I1202 20:47:41.587171  180709 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1202 20:47:39.039828  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:39.040546  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:39.040617  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:39.040709  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:39.093195  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:39.093222  176698 cri.go:89] found id: ""
	I1202 20:47:39.093234  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:39.093303  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.100565  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:39.100681  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:39.154481  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:39.154512  176698 cri.go:89] found id: ""
	I1202 20:47:39.154522  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:39.154590  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.159685  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:39.159776  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:39.206790  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:39.206824  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:39.206831  176698 cri.go:89] found id: ""
	I1202 20:47:39.206843  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:39.206939  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.212118  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.218570  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:39.218642  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:39.272598  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:39.272627  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:39.272633  176698 cri.go:89] found id: ""
	I1202 20:47:39.272645  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:39.272746  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.277891  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.283773  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:39.283875  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:39.337937  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:39.337971  176698 cri.go:89] found id: ""
	I1202 20:47:39.337983  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:39.338054  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.343828  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:39.343905  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:39.393256  176698 cri.go:89] found id: "44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:39.393274  176698 cri.go:89] found id: ""
	I1202 20:47:39.393285  176698 logs.go:282] 1 containers: [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655]
	I1202 20:47:39.393350  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.399324  176698 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1202 20:47:39.399410  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1202 20:47:39.442159  176698 cri.go:89] found id: ""
	I1202 20:47:39.442196  176698 logs.go:282] 0 containers: []
	W1202 20:47:39.442211  176698 logs.go:284] No container was found matching "kindnet"
	I1202 20:47:39.442219  176698 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1202 20:47:39.442292  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1202 20:47:39.482048  176698 cri.go:89] found id: "2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:39.482077  176698 cri.go:89] found id: ""
	I1202 20:47:39.482089  176698 logs.go:282] 1 containers: [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395]
	I1202 20:47:39.482146  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:39.486424  176698 logs.go:123] Gathering logs for kube-apiserver [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf] ...
	I1202 20:47:39.486447  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:39.538158  176698 logs.go:123] Gathering logs for coredns [130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0] ...
	I1202 20:47:39.538200  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:39.577057  176698 logs.go:123] Gathering logs for kube-scheduler [0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1] ...
	I1202 20:47:39.577102  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:39.624196  176698 logs.go:123] Gathering logs for kube-proxy [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e] ...
	I1202 20:47:39.624247  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:39.673984  176698 logs.go:123] Gathering logs for kube-controller-manager [44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655] ...
	I1202 20:47:39.674019  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 44587e47899f67f6b107e0aa8722dce81c0f245d29901dc8ee58d8d6d5703655"
	I1202 20:47:39.718332  176698 logs.go:123] Gathering logs for CRI-O ...
	I1202 20:47:39.718366  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1202 20:47:40.184219  176698 logs.go:123] Gathering logs for kubelet ...
	I1202 20:47:40.184284  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1202 20:47:40.292863  176698 logs.go:123] Gathering logs for dmesg ...
	I1202 20:47:40.292913  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1202 20:47:40.312603  176698 logs.go:123] Gathering logs for describe nodes ...
	I1202 20:47:40.312667  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1202 20:47:40.399412  176698 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1202 20:47:40.399439  176698 logs.go:123] Gathering logs for etcd [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590] ...
	I1202 20:47:40.399456  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:40.451748  176698 logs.go:123] Gathering logs for coredns [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4] ...
	I1202 20:47:40.451787  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:40.508674  176698 logs.go:123] Gathering logs for kube-scheduler [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400] ...
	I1202 20:47:40.508721  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:40.583006  176698 logs.go:123] Gathering logs for storage-provisioner [2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395] ...
	I1202 20:47:40.583053  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b78be849a934d05d8815029eb5f512e8381bb91a6bb33dbc3d75d7b3993c395"
	I1202 20:47:40.623067  176698 logs.go:123] Gathering logs for container status ...
	I1202 20:47:40.623104  176698 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1202 20:47:43.179032  176698 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I1202 20:47:43.179761  176698 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I1202 20:47:43.179828  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1202 20:47:43.179908  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1202 20:47:43.230743  176698 cri.go:89] found id: "2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf"
	I1202 20:47:43.230772  176698 cri.go:89] found id: ""
	I1202 20:47:43.230783  176698 logs.go:282] 1 containers: [2fab31c9d276f0542ecd30bea0e43b98cd1b8bb5f44888d7fae79bace105dccf]
	I1202 20:47:43.230859  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.237813  176698 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1202 20:47:43.237921  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1202 20:47:43.293974  176698 cri.go:89] found id: "5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590"
	I1202 20:47:43.294004  176698 cri.go:89] found id: ""
	I1202 20:47:43.294016  176698 logs.go:282] 1 containers: [5acd40fa1903863cddef605d26815ca1f8ddeb5f3cb911da6d42caf29639b590]
	I1202 20:47:43.294090  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.299156  176698 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1202 20:47:43.299239  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1202 20:47:43.342665  176698 cri.go:89] found id: "eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4"
	I1202 20:47:43.342696  176698 cri.go:89] found id: "130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0"
	I1202 20:47:43.342702  176698 cri.go:89] found id: ""
	I1202 20:47:43.342713  176698 logs.go:282] 2 containers: [eb317a23e37ee4ddc903cd319d7fbf43db3d9a2299820af4eb2555548abea3c4 130d7f63f6cfad5167a94e7615d2d3a26c30bd39fce4b1425274fb51fce8eea0]
	I1202 20:47:43.342779  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.347450  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.353853  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1202 20:47:43.353929  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1202 20:47:43.401373  176698 cri.go:89] found id: "c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400"
	I1202 20:47:43.401399  176698 cri.go:89] found id: "0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1"
	I1202 20:47:43.401404  176698 cri.go:89] found id: ""
	I1202 20:47:43.401413  176698 logs.go:282] 2 containers: [c04371da6c19fdf042cf7b44fe6863a62353cb6ed3c509cb7582b00dd6dd5400 0b272fdbe31bea865176b13a626237bca6571733cea6d7d63e1b305883a2fac1]
	I1202 20:47:43.401492  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.407225  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.413187  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1202 20:47:43.413286  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1202 20:47:43.462833  176698 cri.go:89] found id: "5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e"
	I1202 20:47:43.462863  176698 cri.go:89] found id: ""
	I1202 20:47:43.462875  176698 logs.go:282] 1 containers: [5a0be607231fb93ed500e570ee33b45ee68711701a29a8f85a2d92fff42b430e]
	I1202 20:47:43.462973  176698 ssh_runner.go:195] Run: which crictl
	I1202 20:47:43.467815  176698 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1202 20:47:43.467877  176698 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1202 20:47:38.517609  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt ...
	I1202 20:47:38.517641  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: {Name:mkc7f205ec973991f73503e30764038e4ada8e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.517873  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key ...
	I1202 20:47:38.517910  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.key: {Name:mk2c0c42c8e2faf6c33f82f7062fcea7c70eb537 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.518428  180134 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8
	I1202 20:47:38.518446  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.176]
	I1202 20:47:38.630770  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 ...
	I1202 20:47:38.630814  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8: {Name:mkf045233bd573a7e62274ff983643c2ac949c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.631006  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8 ...
	I1202 20:47:38.631022  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8: {Name:mk4f77b6cab7d226418b939b0450fa455bbf0e92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.631119  180134 certs.go:382] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt.d8483bf8 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt
	I1202 20:47:38.631196  180134 certs.go:386] copying /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key.d8483bf8 -> /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key
	I1202 20:47:38.631257  180134 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key
	I1202 20:47:38.631275  180134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt with IP's: []
	I1202 20:47:38.752106  180134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt ...
	I1202 20:47:38.752154  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt: {Name:mkc1bd0a8f67665c6e8bb74f5995e7b732daf6da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.752398  180134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key ...
	I1202 20:47:38.752425  180134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key: {Name:mk66e494b67bdc506da1a63544b545c9295a12bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:38.752742  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:47:38.752803  180134 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:47:38.752818  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:47:38.752855  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:47:38.752896  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:47:38.752935  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:47:38.753008  180134 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:38.753633  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:47:38.792862  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:47:38.832680  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:47:38.868513  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:47:38.904790  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:47:38.941194  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1202 20:47:38.970544  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:47:39.004988  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:47:39.036607  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:47:39.071483  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:47:39.129826  180134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:47:39.185426  180134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:47:39.217090  180134 ssh_runner.go:195] Run: openssl version
	I1202 20:47:39.225465  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:47:39.243652  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.250475  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.250548  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:47:39.261227  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:47:39.279022  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:47:39.299221  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.306634  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.306750  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:39.315199  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:47:39.332778  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:47:39.351082  180134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.358595  180134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.358705  180134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:47:39.369623  180134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:47:39.386252  180134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:47:39.393143  180134 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1202 20:47:39.393207  180134 kubeadm.go:401] StartCluster: {Name:kindnet-019279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:kindnet-019279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.83.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:39.393302  180134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:47:39.393356  180134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:47:39.432963  180134 cri.go:89] found id: ""
	I1202 20:47:39.433056  180134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1202 20:47:39.448092  180134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1202 20:47:39.461479  180134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1202 20:47:39.474230  180134 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1202 20:47:39.474259  180134 kubeadm.go:158] found existing configuration files:
	
	I1202 20:47:39.474333  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1202 20:47:39.488163  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1202 20:47:39.488236  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1202 20:47:39.500858  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1202 20:47:39.515643  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1202 20:47:39.515722  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1202 20:47:39.531894  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1202 20:47:39.545190  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1202 20:47:39.545262  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1202 20:47:39.558587  180134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1202 20:47:39.569550  180134 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1202 20:47:39.569622  180134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1202 20:47:39.581810  180134 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1202 20:47:39.786572  180134 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1202 20:47:41.657830  179993 pod_ready.go:104] pod "coredns-66bc5c9577-82fs6" is not "Ready", error: <nil>
	I1202 20:47:42.161919  179993 pod_ready.go:94] pod "coredns-66bc5c9577-82fs6" is "Ready"
	I1202 20:47:42.161954  179993 pod_ready.go:86] duration metric: took 2.510175032s for pod "coredns-66bc5c9577-82fs6" in "kube-system" namespace to be "Ready" or be gone ...
	I1202 20:47:42.161967  179993 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-88m47" in "kube-system" namespace to be "Ready" or be gone ...
	W1202 20:47:44.171831  179993 pod_ready.go:104] pod "coredns-66bc5c9577-88m47" is not "Ready", error: <nil>
	I1202 20:47:41.591217  180709 main.go:143] libmachine: domain pause-892862 has defined MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591703  180709 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9e:10:a2", ip: ""} in network mk-pause-892862: {Iface:virbr1 ExpiryTime:2025-12-02 21:46:47 +0000 UTC Type:0 Mac:52:54:00:9e:10:a2 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:pause-892862 Clientid:01:52:54:00:9e:10:a2}
	I1202 20:47:41.591730  180709 main.go:143] libmachine: domain pause-892862 has defined IP address 192.168.39.176 and MAC address 52:54:00:9e:10:a2 in network mk-pause-892862
	I1202 20:47:41.591928  180709 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1202 20:47:41.596712  180709 kubeadm.go:884] updating cluster {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1202 20:47:41.596857  180709 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 20:47:41.596919  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.640327  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.640362  180709 crio.go:433] Images already preloaded, skipping extraction
	I1202 20:47:41.640430  180709 ssh_runner.go:195] Run: sudo crictl images --output json
	I1202 20:47:41.679399  180709 crio.go:514] all images are preloaded for cri-o runtime.
	I1202 20:47:41.679421  180709 cache_images.go:86] Images are preloaded, skipping loading
	I1202 20:47:41.679428  180709 kubeadm.go:935] updating node { 192.168.39.176 8443 v1.34.2 crio true true} ...
	I1202 20:47:41.679522  180709 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-892862 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1202 20:47:41.679586  180709 ssh_runner.go:195] Run: crio config
	I1202 20:47:41.728823  180709 cni.go:84] Creating CNI manager for ""
	I1202 20:47:41.728896  180709 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 20:47:41.728935  180709 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1202 20:47:41.728988  180709 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-892862 NodeName:pause-892862 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1202 20:47:41.729271  180709 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-892862"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1202 20:47:41.729355  180709 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1202 20:47:41.744415  180709 binaries.go:51] Found k8s binaries, skipping transfer
	I1202 20:47:41.744505  180709 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1202 20:47:41.758529  180709 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1202 20:47:41.787792  180709 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1202 20:47:41.811286  180709 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1202 20:47:41.832600  180709 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I1202 20:47:41.836814  180709 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1202 20:47:42.006895  180709 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1202 20:47:42.027123  180709 certs.go:69] Setting up /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862 for IP: 192.168.39.176
	I1202 20:47:42.027153  180709 certs.go:195] generating shared ca certs ...
	I1202 20:47:42.027177  180709 certs.go:227] acquiring lock for ca certs: {Name:mk4d0a32f0604330372f61cbe35af2ea6f3b6c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1202 20:47:42.027375  180709 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key
	I1202 20:47:42.027422  180709 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key
	I1202 20:47:42.027429  180709 certs.go:257] generating profile certs ...
	I1202 20:47:42.027518  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/client.key
	I1202 20:47:42.027573  180709 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key.c6c045af
	I1202 20:47:42.027608  180709 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key
	I1202 20:47:42.027757  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem (1338 bytes)
	W1202 20:47:42.027788  180709 certs.go:480] ignoring /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070_empty.pem, impossibly tiny 0 bytes
	I1202 20:47:42.027794  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca-key.pem (1675 bytes)
	I1202 20:47:42.027818  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/ca.pem (1082 bytes)
	I1202 20:47:42.027840  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/cert.pem (1123 bytes)
	I1202 20:47:42.027867  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/certs/key.pem (1675 bytes)
	I1202 20:47:42.027933  180709 certs.go:484] found cert: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem (1708 bytes)
	I1202 20:47:42.028560  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1202 20:47:42.065172  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1202 20:47:42.098807  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1202 20:47:42.133019  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1202 20:47:42.169561  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1202 20:47:42.204528  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1202 20:47:42.246305  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1202 20:47:42.361413  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/pause-892862/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1202 20:47:42.430775  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1202 20:47:42.493156  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/certs/147070.pem --> /usr/share/ca-certificates/147070.pem (1338 bytes)
	I1202 20:47:42.571562  180709 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/ssl/certs/1470702.pem --> /usr/share/ca-certificates/1470702.pem (1708 bytes)
	I1202 20:47:42.666413  180709 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1202 20:47:42.756478  180709 ssh_runner.go:195] Run: openssl version
	I1202 20:47:42.770080  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1202 20:47:42.796346  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806695  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  2 19:45 /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.806795  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1202 20:47:42.822001  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1202 20:47:42.850457  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/147070.pem && ln -fs /usr/share/ca-certificates/147070.pem /etc/ssl/certs/147070.pem"
	I1202 20:47:42.874641  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890747  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  2 19:57 /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.890825  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/147070.pem
	I1202 20:47:42.904269  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/147070.pem /etc/ssl/certs/51391683.0"
	I1202 20:47:42.931175  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470702.pem && ln -fs /usr/share/ca-certificates/1470702.pem /etc/ssl/certs/1470702.pem"
	I1202 20:47:42.979186  180709 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999319  180709 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  2 19:57 /usr/share/ca-certificates/1470702.pem
	I1202 20:47:42.999403  180709 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470702.pem
	I1202 20:47:43.013677  180709 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1470702.pem /etc/ssl/certs/3ec20f2e.0"
	I1202 20:47:43.040096  180709 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1202 20:47:43.057377  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1202 20:47:43.073530  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1202 20:47:43.088015  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1202 20:47:43.103880  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1202 20:47:43.115613  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1202 20:47:43.126096  180709 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1202 20:47:43.137582  180709 kubeadm.go:401] StartCluster: {Name:pause-892862 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-892862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 20:47:43.137716  180709 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1202 20:47:43.137772  180709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1202 20:47:43.241476  180709 cri.go:89] found id: "5ae303b8cd9929f78e5e243a00749433f3a28f47ff958be9b1bd42b8690a0a4f"
	I1202 20:47:43.241501  180709 cri.go:89] found id: "5ab2b69576bd7ddc7b9834385c7475de654c082386d224fb389bea4e68f3c384"
	I1202 20:47:43.241506  180709 cri.go:89] found id: "063bdc2f4044d47e558e971c8d8742aec22bec89182a48c14bc4dc181c60a531"
	I1202 20:47:43.241510  180709 cri.go:89] found id: "9bbfedda04a70bdbc59f66ca20322b7bf1717ad77a590bbc7c2ce4242714ec5c"
	I1202 20:47:43.241514  180709 cri.go:89] found id: "c3ba4033625655ee75b7cdd32c8895e62e5f26321e371238b33d804ab1138926"
	I1202 20:47:43.241518  180709 cri.go:89] found id: "4eb3b7ec4b7d853bf9eb9a01676c24007457097a629f779a01fc49110e7cc47d"
	I1202 20:47:43.241523  180709 cri.go:89] found id: "7a076c19ae69f444d8beaca6206d51a7ea8266bb0ac74b038fb2531b733b0ed1"
	I1202 20:47:43.241527  180709 cri.go:89] found id: "bdb1b64ca24e08df0dda142abb2f57874f9cda21c9400ad109b3980d49353290"
	I1202 20:47:43.241531  180709 cri.go:89] found id: ""
	I1202 20:47:43.241581  180709 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-892862 -n pause-892862
helpers_test.go:269: (dbg) Run:  kubectl --context pause-892862 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (43.00s)


Test pass (377/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.53
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.87
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.17
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.11
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.16
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.68
31 TestOffline 55.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 131.77
40 TestAddons/serial/GCPAuth/Namespaces 0.16
41 TestAddons/serial/GCPAuth/FakeCredentials 11.52
44 TestAddons/parallel/Registry 18.21
45 TestAddons/parallel/RegistryCreds 0.65
47 TestAddons/parallel/InspektorGadget 11.68
48 TestAddons/parallel/MetricsServer 6.04
50 TestAddons/parallel/CSI 37.26
51 TestAddons/parallel/Headlamp 20.79
52 TestAddons/parallel/CloudSpanner 6.58
53 TestAddons/parallel/LocalPath 58.93
54 TestAddons/parallel/NvidiaDevicePlugin 6.92
55 TestAddons/parallel/Yakd 12.71
57 TestAddons/StoppedEnableDisable 88.3
58 TestCertOptions 58.22
59 TestCertExpiration 273.59
61 TestForceSystemdFlag 83.29
62 TestForceSystemdEnv 63.09
67 TestErrorSpam/setup 36.32
68 TestErrorSpam/start 0.34
69 TestErrorSpam/status 0.69
70 TestErrorSpam/pause 1.51
71 TestErrorSpam/unpause 1.73
72 TestErrorSpam/stop 85.68
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 56.63
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 31.22
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 5.04
84 TestFunctional/serial/CacheCmd/cache/add_local 2.66
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 36.3
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.32
95 TestFunctional/serial/LogsFileCmd 1.31
96 TestFunctional/serial/InvalidService 4
98 TestFunctional/parallel/ConfigCmd 0.43
99 TestFunctional/parallel/DashboardCmd 17.68
100 TestFunctional/parallel/DryRun 0.24
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.75
106 TestFunctional/parallel/ServiceCmdConnect 20.45
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 44.42
110 TestFunctional/parallel/SSHCmd 0.32
111 TestFunctional/parallel/CpCmd 1.12
112 TestFunctional/parallel/MySQL 23.49
113 TestFunctional/parallel/FileSync 0.18
114 TestFunctional/parallel/CertSync 1.1
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
122 TestFunctional/parallel/License 0.9
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
127 TestFunctional/parallel/ImageCommands/ImageBuild 6.37
128 TestFunctional/parallel/ImageCommands/Setup 1.75
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.18
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.24
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.1
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
148 TestFunctional/parallel/ServiceCmd/DeployApp 15.17
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
150 TestFunctional/parallel/ProfileCmd/profile_list 0.37
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
152 TestFunctional/parallel/MountCmd/any-port 7.23
153 TestFunctional/parallel/MountCmd/specific-port 1.45
154 TestFunctional/parallel/ServiceCmd/List 1.25
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.04
156 TestFunctional/parallel/ServiceCmd/JSONOutput 1.33
157 TestFunctional/parallel/Version/short 0.06
158 TestFunctional/parallel/Version/components 0.61
159 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
160 TestFunctional/parallel/ServiceCmd/Format 0.28
161 TestFunctional/parallel/ServiceCmd/URL 0.33
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 63.68
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 34.39
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.09
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 5
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.61
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 2.11
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 31.33
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.3
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.28
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.71
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 15.56
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.77
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.57
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.3
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 41.46
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.41
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.29
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 26.5
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.25
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.22
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.08
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.42
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.94
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 10.19
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.48
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 9.92
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.36
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.35
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.58
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.91
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.9
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.13
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.48
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.23
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.27
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.27
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.28
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.25
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 10.86
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.34
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.35
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 2.37
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.34
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.84
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.13
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.65
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.52
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.64
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.58
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 235.75
262 TestMultiControlPlane/serial/DeployApp 6.98
263 TestMultiControlPlane/serial/PingHostFromPods 1.37
264 TestMultiControlPlane/serial/AddWorkerNode 45.67
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
267 TestMultiControlPlane/serial/CopyFile 10.87
268 TestMultiControlPlane/serial/StopSecondaryNode 74.93
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
270 TestMultiControlPlane/serial/RestartSecondaryNode 35.04
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 369.91
273 TestMultiControlPlane/serial/DeleteSecondaryNode 18.11
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
275 TestMultiControlPlane/serial/StopCluster 255.45
276 TestMultiControlPlane/serial/RestartCluster 99.51
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
278 TestMultiControlPlane/serial/AddSecondaryNode 76.53
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.7
284 TestJSONOutput/start/Command 52.69
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.71
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.61
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.93
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 77.25
316 TestMountStart/serial/StartWithMountFirst 19.41
317 TestMountStart/serial/VerifyMountFirst 0.32
318 TestMountStart/serial/StartWithMountSecond 19.97
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.7
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.25
323 TestMountStart/serial/RestartStopped 19.02
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 96.68
328 TestMultiNode/serial/DeployApp2Nodes 6.2
329 TestMultiNode/serial/PingHostFrom2Pods 0.88
330 TestMultiNode/serial/AddNode 42.6
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.17
334 TestMultiNode/serial/StopNode 2.23
335 TestMultiNode/serial/StartAfterStop 39.75
336 TestMultiNode/serial/RestartKeepsNodes 298.57
337 TestMultiNode/serial/DeleteNode 2.71
338 TestMultiNode/serial/StopMultiNode 162.01
339 TestMultiNode/serial/RestartMultiNode 81.97
340 TestMultiNode/serial/ValidateNameConflict 43.93
347 TestScheduledStopUnix 108.83
351 TestRunningBinaryUpgrade 379.56
353 TestKubernetesUpgrade 154.31
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
357 TestNoKubernetes/serial/StartWithK8s 98.49
358 TestNoKubernetes/serial/StartWithStopK8s 30.44
359 TestNoKubernetes/serial/Start 19.93
360 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
361 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
362 TestNoKubernetes/serial/ProfileList 0.87
363 TestNoKubernetes/serial/Stop 1.32
364 TestNoKubernetes/serial/StartNoArgs 53.09
365 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
373 TestNetworkPlugins/group/false 3.95
377 TestStoppedBinaryUpgrade/Setup 3.3
378 TestStoppedBinaryUpgrade/Upgrade 76.98
379 TestISOImage/Setup 29.58
388 TestPause/serial/Start 64.01
389 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
390 TestNetworkPlugins/group/auto/Start 80.91
392 TestISOImage/Binaries/crictl 0.17
393 TestISOImage/Binaries/curl 0.17
394 TestISOImage/Binaries/docker 0.18
395 TestISOImage/Binaries/git 0.2
396 TestISOImage/Binaries/iptables 0.17
397 TestISOImage/Binaries/podman 0.19
398 TestISOImage/Binaries/rsync 0.17
399 TestISOImage/Binaries/socat 0.2
400 TestISOImage/Binaries/wget 0.18
401 TestISOImage/Binaries/VBoxControl 0.17
402 TestISOImage/Binaries/VBoxService 0.16
403 TestNetworkPlugins/group/kindnet/Start 98.47
405 TestNetworkPlugins/group/auto/KubeletFlags 0.2
406 TestNetworkPlugins/group/auto/NetCatPod 11.26
407 TestNetworkPlugins/group/auto/DNS 0.19
408 TestNetworkPlugins/group/auto/Localhost 0.14
409 TestNetworkPlugins/group/auto/HairPin 0.13
410 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
411 TestNetworkPlugins/group/calico/Start 83.77
412 TestNetworkPlugins/group/custom-flannel/Start 90.77
413 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
414 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
415 TestNetworkPlugins/group/kindnet/DNS 0.15
416 TestNetworkPlugins/group/kindnet/Localhost 0.12
417 TestNetworkPlugins/group/kindnet/HairPin 0.12
418 TestNetworkPlugins/group/enable-default-cni/Start 69.17
419 TestNetworkPlugins/group/calico/ControllerPod 6.01
420 TestNetworkPlugins/group/calico/KubeletFlags 0.19
421 TestNetworkPlugins/group/calico/NetCatPod 10.41
422 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
423 TestNetworkPlugins/group/flannel/Start 73.71
424 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
425 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
426 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
427 TestNetworkPlugins/group/calico/DNS 0.18
428 TestNetworkPlugins/group/calico/Localhost 0.15
429 TestNetworkPlugins/group/calico/HairPin 0.13
430 TestNetworkPlugins/group/custom-flannel/DNS 0.18
431 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
432 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
433 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
434 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
435 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
436 TestNetworkPlugins/group/bridge/Start 60.17
438 TestStartStop/group/old-k8s-version/serial/FirstStart 71.53
440 TestStartStop/group/no-preload/serial/FirstStart 102.32
441 TestNetworkPlugins/group/flannel/ControllerPod 6.01
442 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
443 TestNetworkPlugins/group/flannel/NetCatPod 10.24
444 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
445 TestNetworkPlugins/group/bridge/NetCatPod 11.31
446 TestNetworkPlugins/group/flannel/DNS 0.16
447 TestNetworkPlugins/group/flannel/Localhost 0.13
448 TestNetworkPlugins/group/flannel/HairPin 0.2
449 TestNetworkPlugins/group/bridge/DNS 0.15
450 TestNetworkPlugins/group/bridge/Localhost 0.15
451 TestNetworkPlugins/group/bridge/HairPin 0.13
452 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
454 TestStartStop/group/embed-certs/serial/FirstStart 57.76
456 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.55
457 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
458 TestStartStop/group/old-k8s-version/serial/Stop 85.08
459 TestStartStop/group/no-preload/serial/DeployApp 11.3
460 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
461 TestStartStop/group/no-preload/serial/Stop 78.66
462 TestStartStop/group/embed-certs/serial/DeployApp 11.27
463 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
464 TestStartStop/group/embed-certs/serial/Stop 87.44
465 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 82.8
468 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
469 TestStartStop/group/old-k8s-version/serial/SecondStart 41.57
470 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
471 TestStartStop/group/no-preload/serial/SecondStart 53.82
472 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
473 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
474 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.19
475 TestStartStop/group/old-k8s-version/serial/Pause 2.68
477 TestStartStop/group/newest-cni/serial/FirstStart 57.82
478 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
479 TestStartStop/group/embed-certs/serial/SecondStart 66.21
480 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
481 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 69.35
482 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
483 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
484 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
485 TestStartStop/group/no-preload/serial/Pause 2.95
487 TestISOImage/PersistentMounts//data 0.17
488 TestISOImage/PersistentMounts//var/lib/docker 0.19
489 TestISOImage/PersistentMounts//var/lib/cni 0.19
490 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
491 TestISOImage/PersistentMounts//var/lib/minikube 0.17
492 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
493 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
494 TestISOImage/VersionJSON 0.21
495 TestISOImage/eBPFSupport 0.19
496 TestStartStop/group/newest-cni/serial/DeployApp 0
497 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
498 TestStartStop/group/newest-cni/serial/Stop 8.33
499 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
500 TestStartStop/group/newest-cni/serial/SecondStart 45.41
501 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.01
502 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
503 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7
504 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
505 TestStartStop/group/embed-certs/serial/Pause 2.91
506 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
507 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
508 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.59
509 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
511 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
512 TestStartStop/group/newest-cni/serial/Pause 2.3
TestDownloadOnly/v1.28.0/json-events (22.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-104648 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-104648 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.526966229s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.53s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1202 19:44:43.728554  147070 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1202 19:44:43.728675  147070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
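The same existence check can be repeated by hand against the cache path printed above; a minimal sketch, assuming the MINIKUBE_HOME used by this job:

    # Inspect the cached v1.28.0 cri-o preload tarball that preload-exists looks for
    # (path copied from the preload.go:203 log line above).
    ls -lh /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4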

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-104648
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-104648: exit status 85 (78.257371ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-104648 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-104648 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:44:21
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:44:21.255707  147083 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:44:21.255822  147083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:21.255830  147083 out.go:374] Setting ErrFile to fd 2...
	I1202 19:44:21.255834  147083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:21.256229  147083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	W1202 19:44:21.256345  147083 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21997-143119/.minikube/config/config.json: open /home/jenkins/minikube-integration/21997-143119/.minikube/config/config.json: no such file or directory
	I1202 19:44:21.256798  147083 out.go:368] Setting JSON to true
	I1202 19:44:21.257629  147083 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5205,"bootTime":1764699456,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:44:21.257709  147083 start.go:143] virtualization: kvm guest
	I1202 19:44:21.261060  147083 out.go:99] [download-only-104648] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1202 19:44:21.261188  147083 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball: no such file or directory
	I1202 19:44:21.261221  147083 notify.go:221] Checking for updates...
	I1202 19:44:21.262488  147083 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:44:21.263646  147083 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:44:21.264925  147083 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:44:21.266178  147083 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:44:21.267336  147083 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 19:44:21.269306  147083 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 19:44:21.269572  147083 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:44:21.803232  147083 out.go:99] Using the kvm2 driver based on user configuration
	I1202 19:44:21.803275  147083 start.go:309] selected driver: kvm2
	I1202 19:44:21.803285  147083 start.go:927] validating driver "kvm2" against <nil>
	I1202 19:44:21.803755  147083 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:44:21.804404  147083 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1202 19:44:21.804594  147083 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 19:44:21.804640  147083 cni.go:84] Creating CNI manager for ""
	I1202 19:44:21.804697  147083 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 19:44:21.804712  147083 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 19:44:21.804780  147083 start.go:353] cluster config:
	{Name:download-only-104648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-104648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:44:21.805035  147083 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:44:21.806649  147083 out.go:99] Downloading VM boot image ...
	I1202 19:44:21.806703  147083 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21997-143119/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1202 19:44:31.641343  147083 out.go:99] Starting "download-only-104648" primary control-plane node in "download-only-104648" cluster
	I1202 19:44:31.641398  147083 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 19:44:31.731624  147083 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1202 19:44:31.731693  147083 cache.go:65] Caching tarball of preloaded images
	I1202 19:44:31.731874  147083 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1202 19:44:31.733791  147083 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1202 19:44:31.733814  147083 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 19:44:31.834334  147083 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1202 19:44:31.834462  147083 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-104648 host does not exist
	  To start a cluster, run: "minikube start -p download-only-104648"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
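The preload URL and MD5 checksum captured in the stdout above can also be verified independently of minikube; a sketch, with the URL and checksum copied from the log and an arbitrary download path:

    # Download the v1.28.0 cri-o preload and check it against the MD5 the GCS API
    # returned in the log above (72bc7f8573f574c02d8c9a9b3496176b).
    curl -fLo /tmp/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
    echo "72bc7f8573f574c02d8c9a9b3496176b  /tmp/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -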

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-104648
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-153154 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-153154 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.868584119s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.87s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1202 19:44:53.989120  147070 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1202 19:44:53.989163  147070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-153154
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-153154: exit status 85 (76.412103ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-104648 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-104648 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ delete  │ -p download-only-104648                                                                                                                                                 │ download-only-104648 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ start   │ -o=json --download-only -p download-only-153154 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-153154 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:44:44
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:44:44.175805  147342 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:44:44.176076  147342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:44.176086  147342 out.go:374] Setting ErrFile to fd 2...
	I1202 19:44:44.176091  147342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:44.176343  147342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:44:44.176909  147342 out.go:368] Setting JSON to true
	I1202 19:44:44.177806  147342 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5228,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:44:44.177868  147342 start.go:143] virtualization: kvm guest
	I1202 19:44:44.180042  147342 out.go:99] [download-only-153154] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:44:44.180257  147342 notify.go:221] Checking for updates...
	I1202 19:44:44.181644  147342 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:44:44.182874  147342 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:44:44.184071  147342 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:44:44.185311  147342 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:44:44.186590  147342 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1202 19:44:44.188967  147342 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1202 19:44:44.189195  147342 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:44:44.221779  147342 out.go:99] Using the kvm2 driver based on user configuration
	I1202 19:44:44.221811  147342 start.go:309] selected driver: kvm2
	I1202 19:44:44.221817  147342 start.go:927] validating driver "kvm2" against <nil>
	I1202 19:44:44.222121  147342 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1202 19:44:44.222579  147342 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1202 19:44:44.222769  147342 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1202 19:44:44.222798  147342 cni.go:84] Creating CNI manager for ""
	I1202 19:44:44.222842  147342 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1202 19:44:44.222851  147342 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1202 19:44:44.222902  147342 start.go:353] cluster config:
	{Name:download-only-153154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-153154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:44:44.222989  147342 iso.go:125] acquiring lock: {Name:mkfe4a75ba73b1e7a1c7cd55dc23a305917e17a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1202 19:44:44.224506  147342 out.go:99] Starting "download-only-153154" primary control-plane node in "download-only-153154" cluster
	I1202 19:44:44.224528  147342 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:44:44.815232  147342 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1202 19:44:44.815273  147342 cache.go:65] Caching tarball of preloaded images
	I1202 19:44:44.815458  147342 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1202 19:44:44.817192  147342 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1202 19:44:44.817218  147342 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1202 19:44:44.925536  147342 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1202 19:44:44.925638  147342 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/21997-143119/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-153154 host does not exist
	  To start a cluster, run: "minikube start -p download-only-153154"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-153154
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (3.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-951847 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-951847 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.107149793s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.11s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
--- PASS: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
--- PASS: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-951847
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-951847: exit status 85 (75.81016ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-104648 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-104648 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ delete  │ -p download-only-104648                                                                                                                                                        │ download-only-104648 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ start   │ -o=json --download-only -p download-only-153154 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-153154 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ delete  │ -p download-only-153154                                                                                                                                                        │ download-only-153154 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │ 02 Dec 25 19:44 UTC │
	│ start   │ -o=json --download-only -p download-only-951847 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-951847 │ jenkins │ v1.37.0 │ 02 Dec 25 19:44 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/02 19:44:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1202 19:44:54.445014  147538 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:44:54.445295  147538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:54.445305  147538 out.go:374] Setting ErrFile to fd 2...
	I1202 19:44:54.445311  147538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:44:54.445493  147538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:44:54.446049  147538 out.go:368] Setting JSON to true
	I1202 19:44:54.446888  147538 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5238,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:44:54.446950  147538 start.go:143] virtualization: kvm guest
	I1202 19:44:54.448930  147538 out.go:99] [download-only-951847] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:44:54.449164  147538 notify.go:221] Checking for updates...
	I1202 19:44:54.450393  147538 out.go:171] MINIKUBE_LOCATION=21997
	I1202 19:44:54.451730  147538 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:44:54.452959  147538 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:44:54.454132  147538 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:44:54.455466  147538 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-951847 host does not exist
	  To start a cluster, run: "minikube start -p download-only-951847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-951847
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
I1202 19:44:58.993283  147070 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-718788 --alsologtostderr --binary-mirror http://127.0.0.1:37387 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-718788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-718788
--- PASS: TestBinaryMirror (0.68s)
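The binary.go:80 log line above records the upstream kubectl URL this test falls back to; fetching and checking it by hand looks roughly like this (URLs taken from that log line, download location arbitrary):

    # Fetch kubectl v1.34.2 plus its published SHA-256 and verify the pair.
    curl -fLO "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl"
    curl -fLo kubectl.sha256 "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256"
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum -c -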

                                                
                                    
TestOffline (55.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-847311 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-847311 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.456925991s)
helpers_test.go:175: Cleaning up "offline-crio-847311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-847311
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-847311: (1.109883054s)
--- PASS: TestOffline (55.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-375150
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-375150: exit status 85 (63.840896ms)

                                                
                                                
-- stdout --
	* Profile "addons-375150" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-375150"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-375150
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-375150: exit status 85 (64.51117ms)

                                                
                                                
-- stdout --
	* Profile "addons-375150" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-375150"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (131.77s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-375150 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-375150 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m11.771744828s)
--- PASS: TestAddons/Setup (131.77s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-375150 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-375150 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-375150 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-375150 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a82993f9-890f-42b3-b87f-75109bc29419] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a82993f9-890f-42b3-b87f-75109bc29419] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004448849s
addons_test.go:694: (dbg) Run:  kubectl --context addons-375150 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-375150 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-375150 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.969949ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9bdcd" [34983ae5-489c-4955-ad82-d37b7dc934c4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.021010363s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-ps4f7" [fe6a116e-5821-4880-9421-bf45cf7933a1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004194548s
addons_test.go:392: (dbg) Run:  kubectl --context addons-375150 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-375150 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-375150 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.427472173s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 ip
2025/12/02 19:47:49 [DEBUG] GET http://192.168.39.62:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.21s)
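The reachability check at the heart of this test is just a throwaway busybox pod running wget against the in-cluster registry service; as a rough sketch, replaying it by hand against the same profile uses only the commands already shown above:

    kubectl --context addons-375150 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    out/minikube-linux-amd64 -p addons-375150 ip    # node IP used for the follow-up GET on port 5000 above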

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 6.523791ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-375150
addons_test.go:332: (dbg) Run:  kubectl --context addons-375150 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-t5srt" [268e44f7-be00-45db-bd24-62098690df25] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004589364s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable inspektor-gadget --alsologtostderr -v=1: (5.670075861s)
--- PASS: TestAddons/parallel/InspektorGadget (11.68s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.04s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 7.695609ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-hb8r4" [0885c588-1b60-4aa7-b0f1-0315f1089034] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004018879s
addons_test.go:463: (dbg) Run:  kubectl --context addons-375150 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.04s)

                                                
                                    
x
+
TestAddons/parallel/CSI (37.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1202 19:47:50.028401  147070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 19:47:50.039319  147070 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 19:47:50.039346  147070 kapi.go:107] duration metric: took 10.968185ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.98008ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [545f963a-2725-4d0e-8669-4a23b8cb7c4a] Pending
helpers_test.go:352: "task-pv-pod" [545f963a-2725-4d0e-8669-4a23b8cb7c4a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [545f963a-2725-4d0e-8669-4a23b8cb7c4a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004406073s
addons_test.go:572: (dbg) Run:  kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-375150 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-375150 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-375150 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-375150 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0a8a89f6-8689-411c-826f-b1b25570175a] Pending
helpers_test.go:352: "task-pv-pod-restore" [0a8a89f6-8689-411c-826f-b1b25570175a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0a8a89f6-8689-411c-826f-b1b25570175a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004836402s
addons_test.go:614: (dbg) Run:  kubectl --context addons-375150 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-375150 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-375150 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.86613133s)
--- PASS: TestAddons/parallel/CSI (37.26s)
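Stripped of the polling, the CSI scenario above is a create → snapshot → restore sequence against the bundled testdata; a minimal replay with the same manifests and context looks roughly like:

    kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # PVC "hpvc-restore"
    kubectl --context addons-375150 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod consuming the restored claim

with the pod and original PVC deleted between the snapshot and the restore, as the log shows.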

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-375150 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-rq5kr" [9edea840-3958-4c4e-930f-0906ac66dde5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-rq5kr" [9edea840-3958-4c4e-930f-0906ac66dde5] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004052703s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable headlamp --alsologtostderr -v=1: (5.90466077s)
--- PASS: TestAddons/parallel/Headlamp (20.79s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-6ld2q" [ebdcf5c6-740a-4dd5-bbad-4a9474e2d0f6] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004306085s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.58s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.93s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-375150 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-375150 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [39755217-d6ae-4c95-9293-d6a662d7e1de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [39755217-d6ae-4c95-9293-d6a662d7e1de] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [39755217-d6ae-4c95-9293-d6a662d7e1de] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.005918814s
addons_test.go:967: (dbg) Run:  kubectl --context addons-375150 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 ssh "cat /opt/local-path-provisioner/pvc-4b45ee50-01bc-49df-9618-d88b3acdefc4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-375150 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-375150 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.89258936s)
--- PASS: TestAddons/parallel/LocalPath (58.93s)
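The local-path check follows the same pattern with the rancher provisioner: bind a PVC, let a pod write into it, then read the file back from the node's provisioner directory. Replayed loosely with the manifests referenced above:

    kubectl --context addons-375150 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-375150 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-linux-amd64 -p addons-375150 ssh "cat /opt/local-path-provisioner/pvc-4b45ee50-01bc-49df-9618-d88b3acdefc4_default_test-pvc/file1"

(the pvc-… directory name is specific to this run; it comes from the PV bound to test-pvc).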

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ndzsk" [eef114d3-d8df-4458-a3b5-b2bb9455b793] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009123784s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.92s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8bdsl" [932f720c-0352-4a6e-9e8a-840a82d89c64] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005901777s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-375150 addons disable yakd --alsologtostderr -v=1: (6.703494298s)
--- PASS: TestAddons/parallel/Yakd (12.71s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (88.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-375150
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-375150: (1m28.088882546s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-375150
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-375150
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-375150
--- PASS: TestAddons/StoppedEnableDisable (88.30s)

                                                
                                    
x
+
TestCertOptions (58.22s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-282881 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-282881 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (56.782965439s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-282881 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-282881 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-282881 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-282881" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-282881
--- PASS: TestCertOptions (58.22s)
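Verifying the extra SANs and the non-default API server port outside the harness takes the same two probes the test uses (a sketch; expected contents follow from the start flags above):

    out/minikube-linux-amd64 -p cert-options-282881 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"   # SANs should include 192.168.15.15 and www.google.com
    kubectl --context cert-options-282881 config view                                                                            # server URL should use port 8555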

                                                
                                    
x
+
TestCertExpiration (273.59s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m1.592887173s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.130498804s)
helpers_test.go:175: Cleaning up "cert-expiration-095611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-095611
--- PASS: TestCertExpiration (273.59s)
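The expiry scenario is two starts of one profile with different --cert-expiration values; the timing (about 93s of starts inside a 273s test) indicates the 3m certificates are allowed to lapse before the second start regenerates them. A sketch reusing the exact commands above:

    out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ... wait out the 3-minute certificate window ...
    out/minikube-linux-amd64 start -p cert-expiration-095611 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio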

                                                
                                    
x
+
TestForceSystemdFlag (83.29s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-034910 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-034910 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m22.114293225s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-034910 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-034910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-034910
--- PASS: TestForceSystemdFlag (83.29s)

                                                
                                    
x
+
TestForceSystemdEnv (63.09s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-111873 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-111873 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m2.187661187s)
helpers_test.go:175: Cleaning up "force-systemd-env-111873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-111873
--- PASS: TestForceSystemdEnv (63.09s)

                                                
                                    
x
+
TestErrorSpam/setup (36.32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-405873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-405873 --driver=kvm2  --container-runtime=crio
E1202 19:52:12.175057  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.181524  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.193037  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.214465  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.255966  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.337476  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.499126  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:12.820877  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:13.462985  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:14.744898  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:17.307883  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:22.429239  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:52:32.671528  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-405873 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-405873 --driver=kvm2  --container-runtime=crio: (36.320865475s)
--- PASS: TestErrorSpam/setup (36.32s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (85.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop
E1202 19:52:53.153004  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 19:53:34.115961  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop: (1m22.879557444s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop: (1.291133399s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-405873 --log_dir /tmp/nospam-405873 stop: (1.512552822s)
--- PASS: TestErrorSpam/stop (85.68s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/test/nested/copy/147070/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (56.63s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1202 19:54:56.038148  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-945181 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (56.625440794s)
--- PASS: TestFunctional/serial/StartWithProxy (56.63s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31.22s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1202 19:55:08.345608  147070 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-945181 --alsologtostderr -v=8: (31.221277977s)
functional_test.go:678: soft start took 31.222040694s for "functional-945181" cluster.
I1202 19:55:39.567309  147070 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (31.22s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-945181 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (5.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.1: (1.655815285s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.3: (1.726454265s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:latest: (1.654438141s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.04s)
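cache add pulls each image on the host and loads it into the node's container runtime; the three adds above, plus the in-node verification used later in this group, amount to:

    out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-amd64 -p functional-945181 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl images   # cached images should now be visible inside the node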

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-945181 /tmp/TestFunctionalserialCacheCmdcacheadd_local3104709408/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache add minikube-local-cache-test:functional-945181
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 cache add minikube-local-cache-test:functional-945181: (2.295828467s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache delete minikube-local-cache-test:functional-945181
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-945181
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.66s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (175.8199ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 cache reload: (1.509275408s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
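The reload check removes the cached image from inside the node, confirms crictl no longer finds it, then restores it from the host-side cache; the sequence from the log is:

    out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-linux-amd64 -p functional-945181 cache reload
    out/minikube-linux-amd64 -p functional-945181 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload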

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 kubectl -- --context functional-945181 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-945181 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (36.3s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-945181 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.303943356s)
functional_test.go:776: restart took 36.304068501s for "functional-945181" cluster.
I1202 19:56:26.524495  147070 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (36.30s)
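Passing --extra-config on a restart reconfigures the already-running profile rather than recreating it; the restart exercised here is a single command, roughly:

    out/minikube-linux-amd64 start -p functional-945181 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

The component-health check that follows confirms etcd, kube-apiserver, kube-controller-manager and kube-scheduler all come back Ready.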

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-945181 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 logs: (1.322545007s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 logs --file /tmp/TestFunctionalserialLogsFileCmd2266070607/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 logs --file /tmp/TestFunctionalserialLogsFileCmd2266070607/001/logs.txt: (1.304557333s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-945181 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-945181
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-945181: exit status 115 (231.004213ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.106:30755 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-945181 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.00s)
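The failure here is intentional: the manifest defines a service with no running pod behind it, so the service command must exit 115 with SVC_UNREACHABLE instead of opening a tunnel. Reproduced with the same manifest:

    kubectl --context functional-945181 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-945181   # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-945181 delete -f testdata/invalidsvc.yaml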

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 config get cpus: exit status 14 (66.517905ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 config get cpus: exit status 14 (67.621279ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
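The config round-trip shown above is: get on an unset key fails with exit 14, set/get succeed, and unset returns the key to the not-found state. As a sketch:

    out/minikube-linux-amd64 -p functional-945181 config get cpus     # exit 14: key not in config
    out/minikube-linux-amd64 -p functional-945181 config set cpus 2
    out/minikube-linux-amd64 -p functional-945181 config get cpus     # now resolves to 2
    out/minikube-linux-amd64 -p functional-945181 config unset cpus
    out/minikube-linux-amd64 -p functional-945181 config get cpus     # exit 14 again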

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (17.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945181 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-945181 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 153469: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.68s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (126.49097ms)

                                                
                                                
-- stdout --
	* [functional-945181] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:56:58.072481  153403 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:56:58.072773  153403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:58.072784  153403 out.go:374] Setting ErrFile to fd 2...
	I1202 19:56:58.072788  153403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:58.073039  153403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:56:58.073576  153403 out.go:368] Setting JSON to false
	I1202 19:56:58.074720  153403 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5962,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:56:58.074800  153403 start.go:143] virtualization: kvm guest
	I1202 19:56:58.076714  153403 out.go:179] * [functional-945181] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:56:58.078766  153403 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:56:58.078762  153403 notify.go:221] Checking for updates...
	I1202 19:56:58.083255  153403 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:56:58.084597  153403 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:56:58.086286  153403 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:56:58.087684  153403 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:56:58.088951  153403 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:56:58.090647  153403 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:56:58.091369  153403 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:56:58.125585  153403 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 19:56:58.126763  153403 start.go:309] selected driver: kvm2
	I1202 19:56:58.126786  153403 start.go:927] validating driver "kvm2" against &{Name:functional-945181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-945181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:56:58.126939  153403 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:56:58.129332  153403 out.go:203] 
	W1202 19:56:58.130618  153403 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 19:56:58.131876  153403 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.24s)
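
Note: the non-zero exit is the point of this test: --dry-run still runs minikube's pre-flight validation against the existing profile, and 250MB is below the 1800MB floor, hence exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). Reproducible without touching the running cluster:

    out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio
    echo $?    # 23: requested 250MiB is less than the usable minimum of 1800MB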

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (121.187971ms)

                                                
                                                
-- stdout --
	* [functional-945181] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 19:56:57.952263  153383 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:56:57.952496  153383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:57.952506  153383 out.go:374] Setting ErrFile to fd 2...
	I1202 19:56:57.952510  153383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:56:57.952803  153383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:56:57.953246  153383 out.go:368] Setting JSON to false
	I1202 19:56:57.954052  153383 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5962,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:56:57.954114  153383 start.go:143] virtualization: kvm guest
	I1202 19:56:57.956148  153383 out.go:179] * [functional-945181] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 19:56:57.957408  153383 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:56:57.957431  153383 notify.go:221] Checking for updates...
	I1202 19:56:57.959667  153383 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:56:57.961036  153383 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:56:57.962441  153383 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:56:57.963999  153383 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:56:57.965327  153383 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:56:57.967132  153383 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 19:56:57.967623  153383 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:56:58.000028  153383 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1202 19:56:58.001188  153383 start.go:309] selected driver: kvm2
	I1202 19:56:58.001207  153383 start.go:927] validating driver "kvm2" against &{Name:functional-945181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-945181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:56:58.001341  153383 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:56:58.003790  153383 out.go:203] 
	W1202 19:56:58.005033  153383 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 19:56:58.006218  153383 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
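
Note: same command as the DryRun test, but the output comes back in French, so this exercises the localized RSRC_INSUFFICIENT_REQ_MEMORY message. The harness drives this through the locale environment; a hedged sketch, assuming minikube reads LC_ALL/LANG:

    # Assumption: minikube selects its display language from the locale environment
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-945181 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio
    # expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...", exit 23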

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
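
Note: the three invocations cover the default, templated, and JSON forms of status. The template fields must match minikube's status struct ("kublet" in the logged command is just the label the test chose for its output; the field itself is .Kubelet):

    out/minikube-linux-amd64 -p functional-945181 status
    out/minikube-linux-amd64 -p functional-945181 status -o json
    out/minikube-linux-amd64 -p functional-945181 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'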

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (20.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-945181 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-945181 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-jn27g" [c9cb8a15-f189-4215-a221-f5b539c75930] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-jn27g" [c9cb8a15-f189-4215-a221-f5b539c75930] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.005222324s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.106:31032
functional_test.go:1680: http://192.168.39.106:31032: success! body:
Request served by hello-node-connect-7d85dfc575-jn27g

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.106:31032
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.45s)
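
Note: this is the standard NodePort round-trip: deploy, expose, resolve the URL through minikube, then hit it. A sketch of the same steps (the IP and port shown above are specific to this run):

    kubectl --context functional-945181 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-945181 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-945181 wait --for=condition=ready pod -l app=hello-node-connect --timeout=600s
    URL=$(out/minikube-linux-amd64 -p functional-945181 service hello-node-connect --url)
    curl -s "$URL"    # echo-server reflects the request: "Request served by hello-node-connect-..."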

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f01cd9a8-0dba-4103-80d7-1c251d128d13] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004120584s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-945181 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-945181 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-945181 get pvc myclaim -o=json
I1202 19:56:40.092134  147070 retry.go:31] will retry after 2.249521848s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:d04440ae-eda4-44a6-aa57-9f7e8bc5d0df ResourceVersion:694 Generation:0 CreationTimestamp:2025-12-02 19:56:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0014390f0 VolumeMode:0xc001439100 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-945181 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-945181 apply -f testdata/storage-provisioner/pod.yaml
I1202 19:56:42.559967  147070 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d9442264-c7a5-4a2e-89a8-531f8a998b5a] Pending
helpers_test.go:352: "sp-pod" [d9442264-c7a5-4a2e-89a8-531f8a998b5a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d9442264-c7a5-4a2e-89a8-531f8a998b5a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.005049582s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-945181 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-945181 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-945181 delete -f testdata/storage-provisioner/pod.yaml: (1.24295468s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-945181 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4c5ddae3-42b8-4355-8377-63b4de1c930a] Pending
helpers_test.go:352: "sp-pod" [4c5ddae3-42b8-4355-8377-63b4de1c930a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4c5ddae3-42b8-4355-8377-63b4de1c930a] Running
2025/12/02 19:57:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004767853s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-945181 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.42s)
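
Note: the point of the two pod lifecycles above is persistence: a file written to the claim before the pod is deleted must still be there in the replacement pod. The same check by hand, using the test's own manifests:

    kubectl --context functional-945181 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-945181 get pvc myclaim                        # wait for STATUS: Bound
    kubectl --context functional-945181 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-945181 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-945181 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-945181 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-945181 exec sp-pod -- ls /tmp/mount           # foo survives the pod swap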

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh -n functional-945181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cp functional-945181:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd613834414/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh -n functional-945181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh -n functional-945181 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.12s)
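
Note: minikube cp is exercised in both directions, plus a copy into a directory that does not yet exist inside the VM (which the command creates). Condensed, with an arbitrary local destination path for the VM-to-host copy:

    out/minikube-linux-amd64 -p functional-945181 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> VM
    out/minikube-linux-amd64 -p functional-945181 cp functional-945181:/home/docker/cp-test.txt /tmp/cp.txt   # VM -> host
    out/minikube-linux-amd64 -p functional-945181 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt     # missing target dirs are created
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /tmp/does/not/exist/cp-test.txt"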

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-945181 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-854zl" [c67b876c-bf4c-46dc-bb56-bd8bd2b18e2e] Pending
helpers_test.go:352: "mysql-5bb876957f-854zl" [c67b876c-bf4c-46dc-bb56-bd8bd2b18e2e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-854zl" [c67b876c-bf4c-46dc-bb56-bd8bd2b18e2e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.004995912s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-945181 exec mysql-5bb876957f-854zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-945181 exec mysql-5bb876957f-854zl -- mysql -ppassword -e "show databases;": exit status 1 (145.958477ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 19:56:54.014192  147070 retry.go:31] will retry after 761.224547ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-945181 exec mysql-5bb876957f-854zl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-945181 exec mysql-5bb876957f-854zl -- mysql -ppassword -e "show databases;": exit status 1 (116.967449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1202 19:56:54.893264  147070 retry.go:31] will retry after 2.1150858s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-945181 exec mysql-5bb876957f-854zl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.49s)
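
Note: the two failed exec attempts are expected: the pod goes Running before mysqld has finished initializing, so the socket is not available yet and the test retries. The same poll by hand:

    kubectl --context functional-945181 replace --force -f testdata/mysql.yaml
    kubectl --context functional-945181 wait --for=condition=ready pod -l app=mysql --timeout=600s
    # the pod can be Ready while mysqld is still initializing; retry until the socket accepts connections
    until kubectl --context functional-945181 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done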

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/147070/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/test/nested/copy/147070/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)
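
Note: the path under test comes from minikube's file sync: files placed below the files/ directory of MINIKUBE_HOME are copied into the VM at the same relative path when the machine is (re)started, and 147070 is just this run's process-derived suffix. A hedged sketch, assuming that layout:

    # Assumption: $MINIKUBE_HOME/files/... is mirrored into the VM's filesystem on the next start
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/147070"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/147070/hosts"
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/test/nested/copy/147070/hosts"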

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/147070.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/ssl/certs/147070.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/147070.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /usr/share/ca-certificates/147070.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1470702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/ssl/certs/1470702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1470702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /usr/share/ca-certificates/1470702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.10s)
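
Note: similarly for certificates: PEM files dropped into the certs/ directory of MINIKUBE_HOME are installed inside the VM under /etc/ssl/certs and /usr/share/ca-certificates, both by name and by their OpenSSL subject-hash (the .0 entries checked above). Hedged sketch with a hypothetical my-ca.pem:

    # Assumption: $MINIKUBE_HOME/certs/*.pem is installed into the VM's trust store on start
    cp my-ca.pem "$MINIKUBE_HOME/certs/"
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
    out/minikube-linux-amd64 -p functional-945181 ssh "ls /etc/ssl/certs | grep '\.0'"   # subject-hash entries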

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-945181 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
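
Note: the go-template above walks the first node's label map key by key; a jsonpath equivalent is shorter if you only need the raw map:

    kubectl --context functional-945181 get nodes -o jsonpath='{.items[0].metadata.labels}'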

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active docker": exit status 1 (182.362987ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active containerd": exit status 1 (187.970351ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
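
Note: with crio selected as the container runtime, docker and containerd are expected to be inactive inside the VM. systemctl is-active exits 3 for an inactive unit, and minikube ssh propagates that, so the "Non-zero exit ... status 3" lines are the check passing, not failing:

    out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active crio"         # active, exit 0
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active docker"       # inactive, exit 3
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo systemctl is-active containerd"   # inactive, exit 3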

                                                
                                    
x
+
TestFunctional/parallel/License (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945181 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-945181
localhost/kicbase/echo-server:functional-945181
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945181 image ls --format short --alsologtostderr:
I1202 19:57:07.501384  153813 out.go:360] Setting OutFile to fd 1 ...
I1202 19:57:07.501701  153813 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.501712  153813 out.go:374] Setting ErrFile to fd 2...
I1202 19:57:07.501716  153813 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.502047  153813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 19:57:07.502692  153813 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.502807  153813 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.505305  153813 ssh_runner.go:195] Run: systemctl --version
I1202 19:57:07.507252  153813 main.go:143] libmachine: domain functional-945181 has defined MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.507768  153813 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:f6:42", ip: ""} in network mk-functional-945181: {Iface:virbr1 ExpiryTime:2025-12-02 20:54:26 +0000 UTC Type:0 Mac:52:54:00:d6:f6:42 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-945181 Clientid:01:52:54:00:d6:f6:42}
I1202 19:57:07.507797  153813 main.go:143] libmachine: domain functional-945181 has defined IP address 192.168.39.106 and MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.507984  153813 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-945181/id_rsa Username:docker}
I1202 19:57:07.610042  153813 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
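
Note: the ImageList* subtests that follow cover the same image inventory in four formats; short is the easiest to grep, and json/yaml are the ones to feed to tooling (the JSON is an array of {id, repoDigests, repoTags, size} objects, as the ImageListJson output below shows). Sketch, assuming jq is available for the JSON case:

    out/minikube-linux-amd64 -p functional-945181 image ls --format short
    out/minikube-linux-amd64 -p functional-945181 image ls --format table
    out/minikube-linux-amd64 -p functional-945181 image ls --format json | jq -r '.[].repoTags[]'
    out/minikube-linux-amd64 -p functional-945181 image ls --format yaml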

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945181 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ localhost/minikube-local-cache-test     │ functional-945181  │ 9aac2ec33e58e │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-945181  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945181 image ls --format table --alsologtostderr:
I1202 19:57:08.256901  153866 out.go:360] Setting OutFile to fd 1 ...
I1202 19:57:08.257015  153866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:08.257027  153866 out.go:374] Setting ErrFile to fd 2...
I1202 19:57:08.257034  153866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:08.257256  153866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 19:57:08.257838  153866 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:08.257932  153866 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:08.260059  153866 ssh_runner.go:195] Run: systemctl --version
I1202 19:57:08.262898  153866 main.go:143] libmachine: domain functional-945181 has defined MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:08.263343  153866 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:f6:42", ip: ""} in network mk-functional-945181: {Iface:virbr1 ExpiryTime:2025-12-02 20:54:26 +0000 UTC Type:0 Mac:52:54:00:d6:f6:42 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-945181 Clientid:01:52:54:00:d6:f6:42}
I1202 19:57:08.263371  153866 main.go:143] libmachine: domain functional-945181 has defined IP address 192.168.39.106 and MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:08.263541  153866 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-945181/id_rsa Username:docker}
I1202 19:57:08.367912  153866 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945181 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e
1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0
b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-945181"],"size":"4943877"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02
799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/k
ube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fc
a08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9aac2ec33e58efb0f735c272e0b004133820665034a8525378075097a36f14e8","repoDigests":["localhost/minikube-local-cache-test@sha256:5e4f7f3c71c2b01315b8f0a9f9901b695d1af7b36c6c703c48a24685
0f6d3e0e"],"repoTags":["localhost/minikube-local-cache-test:functional-945181"],"size":"3330"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945181 image ls --format json --alsologtostderr:
I1202 19:57:07.957322  153855 out.go:360] Setting OutFile to fd 1 ...
I1202 19:57:07.957430  153855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.957442  153855 out.go:374] Setting ErrFile to fd 2...
I1202 19:57:07.957448  153855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.957675  153855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 19:57:07.958260  153855 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.958372  153855 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.960764  153855 ssh_runner.go:195] Run: systemctl --version
I1202 19:57:07.963479  153855 main.go:143] libmachine: domain functional-945181 has defined MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.964129  153855 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:f6:42", ip: ""} in network mk-functional-945181: {Iface:virbr1 ExpiryTime:2025-12-02 20:54:26 +0000 UTC Type:0 Mac:52:54:00:d6:f6:42 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-945181 Clientid:01:52:54:00:d6:f6:42}
I1202 19:57:07.964157  153855 main.go:143] libmachine: domain functional-945181 has defined IP address 192.168.39.106 and MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.964321  153855 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-945181/id_rsa Username:docker}
I1202 19:57:08.079327  153855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945181 image ls --format yaml --alsologtostderr:
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 9aac2ec33e58efb0f735c272e0b004133820665034a8525378075097a36f14e8
repoDigests:
- localhost/minikube-local-cache-test@sha256:5e4f7f3c71c2b01315b8f0a9f9901b695d1af7b36c6c703c48a246850f6d3e0e
repoTags:
- localhost/minikube-local-cache-test:functional-945181
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-945181
size: "4943877"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945181 image ls --format yaml --alsologtostderr:
I1202 19:57:07.745998  153834 out.go:360] Setting OutFile to fd 1 ...
I1202 19:57:07.746112  153834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.746130  153834 out.go:374] Setting ErrFile to fd 2...
I1202 19:57:07.746136  153834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.746361  153834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 19:57:07.747078  153834 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.747222  153834 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.749688  153834 ssh_runner.go:195] Run: systemctl --version
I1202 19:57:07.751878  153834 main.go:143] libmachine: domain functional-945181 has defined MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.752254  153834 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:f6:42", ip: ""} in network mk-functional-945181: {Iface:virbr1 ExpiryTime:2025-12-02 20:54:26 +0000 UTC Type:0 Mac:52:54:00:d6:f6:42 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-945181 Clientid:01:52:54:00:d6:f6:42}
I1202 19:57:07.752280  153834 main.go:143] libmachine: domain functional-945181 has defined IP address 192.168.39.106 and MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.752400  153834 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-945181/id_rsa Username:docker}
I1202 19:57:07.838398  153834 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
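
Note: the YAML listing above is minikube's rendering of the node's container-runtime image metadata; the stderr trace shows the underlying call it makes over SSH. A minimal sketch for pulling the same data by hand, using only commands that appear elsewhere in this log:

    out/minikube-linux-amd64 -p functional-945181 ssh "sudo crictl images --output json"
    out/minikube-linux-amd64 -p functional-945181 image ls --format yaml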

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh pgrep buildkitd: exit status 1 (195.920678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image build -t localhost/my-image:functional-945181 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 image build -t localhost/my-image:functional-945181 testdata/build --alsologtostderr: (5.989202834s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-945181 image build -t localhost/my-image:functional-945181 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 339647254cf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-945181
--> 692ae8b23c1
Successfully tagged localhost/my-image:functional-945181
692ae8b23c14df43a21547ab9bf4fc008a22cfaf99c3d901df66c663a78a69d6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-945181 image build -t localhost/my-image:functional-945181 testdata/build --alsologtostderr:
I1202 19:57:07.775689  153843 out.go:360] Setting OutFile to fd 1 ...
I1202 19:57:07.776008  153843 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.776019  153843 out.go:374] Setting ErrFile to fd 2...
I1202 19:57:07.776024  153843 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 19:57:07.776280  153843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 19:57:07.776928  153843 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.777630  153843 config.go:182] Loaded profile config "functional-945181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1202 19:57:07.779739  153843 ssh_runner.go:195] Run: systemctl --version
I1202 19:57:07.782576  153843 main.go:143] libmachine: domain functional-945181 has defined MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.783047  153843 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d6:f6:42", ip: ""} in network mk-functional-945181: {Iface:virbr1 ExpiryTime:2025-12-02 20:54:26 +0000 UTC Type:0 Mac:52:54:00:d6:f6:42 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:functional-945181 Clientid:01:52:54:00:d6:f6:42}
I1202 19:57:07.783074  153843 main.go:143] libmachine: domain functional-945181 has defined IP address 192.168.39.106 and MAC address 52:54:00:d6:f6:42 in network mk-functional-945181
I1202 19:57:07.783258  153843 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-945181/id_rsa Username:docker}
I1202 19:57:07.864437  153843 build_images.go:162] Building image from path: /tmp/build.2623783066.tar
I1202 19:57:07.864514  153843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 19:57:07.881896  153843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2623783066.tar
I1202 19:57:07.892652  153843 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2623783066.tar: stat -c "%s %y" /var/lib/minikube/build/build.2623783066.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2623783066.tar': No such file or directory
I1202 19:57:07.892705  153843 ssh_runner.go:362] scp /tmp/build.2623783066.tar --> /var/lib/minikube/build/build.2623783066.tar (3072 bytes)
I1202 19:57:07.969556  153843 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2623783066
I1202 19:57:08.006932  153843 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2623783066 -xf /var/lib/minikube/build/build.2623783066.tar
I1202 19:57:08.043329  153843 crio.go:315] Building image: /var/lib/minikube/build/build.2623783066
I1202 19:57:08.043414  153843 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-945181 /var/lib/minikube/build/build.2623783066 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 19:57:13.666600  153843 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-945181 /var/lib/minikube/build/build.2623783066 --cgroup-manager=cgroupfs: (5.623146925s)
I1202 19:57:13.666725  153843 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2623783066
I1202 19:57:13.682246  153843 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2623783066.tar
I1202 19:57:13.695138  153843 build_images.go:218] Built localhost/my-image:functional-945181 from /tmp/build.2623783066.tar
I1202 19:57:13.695177  153843 build_images.go:134] succeeded building to: functional-945181
I1202 19:57:13.695182  153843 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.37s)
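
Note: the STEP 1/3 through 3/3 lines above imply a three-instruction build context. A hypothetical reconstruction of testdata/build (the real fixture may differ), followed by the build command the test runs:

    # testdata/build/Dockerfile — reconstructed from the STEP lines above, not the actual fixture
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

    out/minikube-linux-amd64 -p functional-945181 image build -t localhost/my-image:functional-945181 testdata/build --alsologtostderr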

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.72621562s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-945181
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image load --daemon kicbase/echo-server:functional-945181 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 image load --daemon kicbase/echo-server:functional-945181 --alsologtostderr: (1.198095525s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image load --daemon kicbase/echo-server:functional-945181 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-945181
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image load --daemon kicbase/echo-server:functional-945181 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image save kicbase/echo-server:functional-945181 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 image save kicbase/echo-server:functional-945181 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (7.243186856s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image rm kicbase/echo-server:functional-945181 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.10s)
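
Note: ImageSaveToFile and ImageLoadFromFile together round-trip an image through a tarball on the host. Condensed from the commands above:

    out/minikube-linux-amd64 -p functional-945181 image save kicbase/echo-server:functional-945181 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-945181 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-945181 image ls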

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-945181
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 image save --daemon kicbase/echo-server:functional-945181 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-945181
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (15.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-945181 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-945181 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qm87q" [58bb7434-4f4c-4c82-98fd-45bb8cf51999] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-qm87q" [58bb7434-4f4c-4c82-98fd-45bb8cf51999] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.008196477s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "299.782913ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.400249ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "261.305166ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.673379ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdany-port3800282489/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764705416574297059" to /tmp/TestFunctionalparallelMountCmdany-port3800282489/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764705416574297059" to /tmp/TestFunctionalparallelMountCmdany-port3800282489/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764705416574297059" to /tmp/TestFunctionalparallelMountCmdany-port3800282489/001/test-1764705416574297059
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.51462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:56:56.728097  147070 retry.go:31] will retry after 745.862044ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 19:56 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 19:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 19:56 test-1764705416574297059
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh cat /mount-9p/test-1764705416574297059
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-945181 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0176949d-b03f-481f-9824-a10963b91d03] Pending
helpers_test.go:352: "busybox-mount" [0176949d-b03f-481f-9824-a10963b91d03] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0176949d-b03f-481f-9824-a10963b91d03] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0176949d-b03f-481f-9824-a10963b91d03] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003267403s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-945181 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdany-port3800282489/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.23s)
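
Note: a condensed sketch of the host-side 9p mount flow exercised above. The test harness keeps the mount command running as a daemon; backgrounding it with & here is an assumption for manual use:

    out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdany-port3800282489/001:/mount-9p &
    out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-945181 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-945181 ssh "sudo umount -f /mount-9p"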

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdspecific-port4074318796/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.435912ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:57:04.019253  147070 retry.go:31] will retry after 476.776284ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdspecific-port4074318796/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "sudo umount -f /mount-9p"
I1202 19:57:05.090787  147070 detect.go:223] nested VM detected
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "sudo umount -f /mount-9p": exit status 1 (193.759543ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-945181 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdspecific-port4074318796/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 service list: (1.250673151s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.25s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T" /mount1: exit status 1 (207.977227ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:57:05.470581  147070 retry.go:31] will retry after 297.065168ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-945181 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-945181 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1638553993/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-945181 service list -o json: (1.330481361s)
functional_test.go:1504: Took "1.330580708s" to run "out/minikube-linux-amd64 -p functional-945181 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.33s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 version -o=json --components
E1202 19:57:12.168516  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.106:31946
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-945181 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.106:31946
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
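
Note: the ServiceCmd subtests above exercise one NodePort service end to end. The deployment steps from DeployApp plus the URL lookup, condensed:

    kubectl --context functional-945181 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-945181 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-945181 service hello-node --url
    # expected to print the NodePort endpoint, http://192.168.39.106:31946 in this run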

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-945181
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-945181
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-945181
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21997-143119/.minikube/files/etc/test/nested/copy/147070/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (63.68s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1202 19:57:39.884833  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-007973 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m3.674849801s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (63.68s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (34.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1202 19:58:23.936533  147070 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-007973 --alsologtostderr -v=8: (34.390569868s)
functional_test.go:678: soft start took 34.39096399s for "functional-007973" cluster.
I1202 19:58:58.327518  147070 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (34.39s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-007973 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:3.1: (1.628360752s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:3.3: (1.692927413s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 cache add registry.k8s.io/pause:latest: (1.675900686s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (5.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3439570509/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache add minikube-local-cache-test:functional-007973
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 cache add minikube-local-cache-test:functional-007973: (2.295258464s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache delete minikube-local-cache-test:functional-007973
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (183.766654ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 cache reload: (1.520421316s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (2.11s)
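
Note: cache_reload verifies that an image removed from the node can be restored from minikube's local cache. Condensed from the commands above:

    out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
    out/minikube-linux-amd64 -p functional-007973 cache reload
    out/minikube-linux-amd64 -p functional-007973 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload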

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 kubectl -- --context functional-007973 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-007973 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (31.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-007973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.3300368s)
functional_test.go:776: restart took 31.330155147s for "functional-007973" cluster.
I1202 19:59:40.202851  147070 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (31.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-007973 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)
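
Note: ExtraConfig restarts the cluster with an apiserver admission-plugin override, and ComponentHealth then spot-checks the control-plane pods. The two invocations, condensed from above:

    out/minikube-linux-amd64 start -p functional-007973 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-007973 get po -l tier=control-plane -n kube-system -o=json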

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 logs: (1.298048541s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs872897598/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs872897598/001/logs.txt: (1.27791552s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-007973 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-007973
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-007973: exit status 115 (234.90981ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.38:30353 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-007973 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-007973 delete -f testdata/invalidsvc.yaml: (1.246187891s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 config get cpus: exit status 14 (83.765432ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 config get cpus: exit status 14 (67.247097ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (15.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-007973 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-007973 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 155573: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (15.56s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-007973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (117.19133ms)

-- stdout --
	* [functional-007973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1202 19:59:50.113411  155513 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:59:50.113736  155513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:50.113746  155513 out.go:374] Setting ErrFile to fd 2...
	I1202 19:59:50.113751  155513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:50.113926  155513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:59:50.114342  155513 out.go:368] Setting JSON to false
	I1202 19:59:50.115192  155513 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6134,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:59:50.115251  155513 start.go:143] virtualization: kvm guest
	I1202 19:59:50.117114  155513 out.go:179] * [functional-007973] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 19:59:50.118627  155513 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:59:50.118638  155513 notify.go:221] Checking for updates...
	I1202 19:59:50.119804  155513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:59:50.120967  155513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:59:50.122158  155513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:59:50.123217  155513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:59:50.127886  155513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:59:50.129823  155513 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:59:50.130337  155513 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:59:50.161844  155513 out.go:179] * Using the kvm2 driver based on existing profile
	I1202 19:59:50.163007  155513 start.go:309] selected driver: kvm2
	I1202 19:59:50.163026  155513 start.go:927] validating driver "kvm2" against &{Name:functional-007973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-007973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:59:50.163179  155513 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:59:50.165223  155513 out.go:203] 
	W1202 19:59:50.166436  155513 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1202 19:59:50.167831  155513 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-007973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-007973 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (120.249825ms)

-- stdout --
	* [functional-007973] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1202 19:59:49.996062  155497 out.go:360] Setting OutFile to fd 1 ...
	I1202 19:59:49.996168  155497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:49.996180  155497 out.go:374] Setting ErrFile to fd 2...
	I1202 19:59:49.996187  155497 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 19:59:49.996493  155497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 19:59:49.997022  155497 out.go:368] Setting JSON to false
	I1202 19:59:49.998044  155497 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6134,"bootTime":1764699456,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 19:59:49.998109  155497 start.go:143] virtualization: kvm guest
	I1202 19:59:50.000335  155497 out.go:179] * [functional-007973] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1202 19:59:50.002272  155497 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 19:59:50.002353  155497 notify.go:221] Checking for updates...
	I1202 19:59:50.005341  155497 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 19:59:50.006767  155497 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 19:59:50.008282  155497 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 19:59:50.009776  155497 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 19:59:50.011132  155497 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 19:59:50.012824  155497 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 19:59:50.013329  155497 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 19:59:50.044600  155497 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1202 19:59:50.045938  155497 start.go:309] selected driver: kvm2
	I1202 19:59:50.045957  155497 start.go:927] validating driver "kvm2" against &{Name:functional-007973 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764169655-21974@sha256:5caa2df9c71885b15a10c4769bf4c9c00c1759c0d87b1a7e0b5b61285526245b Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-007973 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.38 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1202 19:59:50.046079  155497 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 19:59:50.048111  155497 out.go:203] 
	W1202 19:59:50.049209  155497 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1202 19:59:50.050351  155497 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.77s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.77s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.57s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-007973 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-007973 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-ztbh2" [970c8e10-0e10-464b-8614-096a95f68504] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-ztbh2" [970c8e10-0e10-464b-8614-096a95f68504] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00473326s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.38:32154
functional_test.go:1680: http://192.168.39.38:32154: success! body:
Request served by hello-node-connect-9f67c86d4-ztbh2

HTTP/1.1 GET /

Host: 192.168.39.38:32154
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.46s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [aa105de2-50c3-4a31-bcab-91c86918e799] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006941184s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-007973 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-007973 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-007973 get pvc myclaim -o=json
I1202 20:00:06.059110  147070 retry.go:31] will retry after 2.950789115s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:48961400-8c7b-4b0b-84c9-b9b7585f6ae7 ResourceVersion:853 Generation:0 CreationTimestamp:2025-12-02 20:00:05 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b946c0 VolumeMode:0xc001b946d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-007973 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-007973 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a7684106-41bc-4a8c-ab5c-0e2e0090bfad] Pending
helpers_test.go:352: "sp-pod" [a7684106-41bc-4a8c-ab5c-0e2e0090bfad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a7684106-41bc-4a8c-ab5c-0e2e0090bfad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004643762s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-007973 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-007973 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-007973 apply -f testdata/storage-provisioner/pod.yaml
I1202 20:00:34.088900  147070 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1d019785-dc5f-4d54-a838-529e5ab56717] Pending
helpers_test.go:352: "sp-pod" [1d019785-dc5f-4d54-a838-529e5ab56717] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1d019785-dc5f-4d54-a838-529e5ab56717] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00352517s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-007973 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (41.46s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.29s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh -n functional-007973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cp functional-007973:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp559296375/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh -n functional-007973 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh -n functional-007973 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (26.5s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-007973 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-tnbfx" [7753540f-90a3-4455-bb52-1bfff4eea967] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-tnbfx" [7753540f-90a3-4455-bb52-1bfff4eea967] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 23.020116638s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-007973 exec mysql-844cf969f6-tnbfx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-007973 exec mysql-844cf969f6-tnbfx -- mysql -ppassword -e "show databases;": exit status 1 (168.920818ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1202 20:00:25.868189  147070 retry.go:31] will retry after 1.078715041s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-007973 exec mysql-844cf969f6-tnbfx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-007973 exec mysql-844cf969f6-tnbfx -- mysql -ppassword -e "show databases;": exit status 1 (121.177531ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1202 20:00:27.068628  147070 retry.go:31] will retry after 1.768567264s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-007973 exec mysql-844cf969f6-tnbfx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (26.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.25s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/147070/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /etc/test/nested/copy/147070/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/147070.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /etc/ssl/certs/147070.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/147070.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /usr/share/ca-certificates/147070.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1470702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /etc/ssl/certs/1470702.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1470702.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /usr/share/ca-certificates/1470702.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-007973 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "sudo systemctl is-active docker": exit status 1 (203.926867ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "sudo systemctl is-active containerd": exit status 1 (211.88622ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.94s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.94s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-007973 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-007973 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-424rl" [877ae0d7-1632-4381-859c-4ea1c0e20507] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-424rl" [877ae0d7-1632-4381-859c-4ea1c0e20507] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.007649067s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.92s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2614779801/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764705588030185969" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2614779801/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764705588030185969" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2614779801/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764705588030185969" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2614779801/001/test-1764705588030185969
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.969917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 19:59:48.229473  147070 retry.go:31] will retry after 288.003636ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  2 19:59 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  2 19:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  2 19:59 test-1764705588030185969
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh cat /mount-9p/test-1764705588030185969
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-007973 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e8ebf774-1ec2-463f-bd36-319856926346] Pending
helpers_test.go:352: "busybox-mount" [e8ebf774-1ec2-463f-bd36-319856926346] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e8ebf774-1ec2-463f-bd36-319856926346] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e8ebf774-1ec2-463f-bd36-319856926346] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004228241s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-007973 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2614779801/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.92s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "295.295926ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.663956ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "277.703741ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.026304ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.58s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo697885853/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (239.758198ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1202 19:59:58.189810  147070 retry.go:31] will retry after 537.735345ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo697885853/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "sudo umount -f /mount-9p": exit status 1 (195.58169ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-007973 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo697885853/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.91s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.9s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service list -o json
functional_test.go:1504: Took "902.601239ms" to run "out/minikube-linux-amd64 -p functional-007973 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.13s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.48s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T" /mount1: exit status 1 (219.729722ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1202 19:59:59.746928  147070 retry.go:31] will retry after 316.620053ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-007973 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-007973 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2020275693/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.23s)
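For manual reproduction outside the harness, the same cleanup sequence can be replayed against a running functional-007973 profile (a minimal sketch based on the commands above; the host directory is a placeholder):
	out/minikube-linux-amd64 mount -p functional-007973 /tmp/mnt-demo:/mount1 --alsologtostderr -v=1 &   # background the mount helper
	out/minikube-linux-amd64 -p functional-007973 ssh "findmnt -T /mount1"                               # confirm the mount is visible in the guest
	out/minikube-linux-amd64 mount -p functional-007973 --kill=true                                      # kill all mount helpers for the profile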

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007973 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-007973
localhost/kicbase/echo-server:functional-007973
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007973 image ls --format short --alsologtostderr:
I1202 20:00:09.214881  156390 out.go:360] Setting OutFile to fd 1 ...
I1202 20:00:09.215205  156390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.215220  156390 out.go:374] Setting ErrFile to fd 2...
I1202 20:00:09.215227  156390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.215560  156390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 20:00:09.216438  156390 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.216587  156390 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.219282  156390 ssh_runner.go:195] Run: systemctl --version
I1202 20:00:09.222052  156390 main.go:143] libmachine: domain functional-007973 has defined MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.222525  156390 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:27:22", ip: ""} in network mk-functional-007973: {Iface:virbr1 ExpiryTime:2025-12-02 20:57:36 +0000 UTC Type:0 Mac:52:54:00:13:27:22 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-007973 Clientid:01:52:54:00:13:27:22}
I1202 20:00:09.222552  156390 main.go:143] libmachine: domain functional-007973 has defined IP address 192.168.39.38 and MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.222728  156390 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-007973/id_rsa Username:docker}
I1202 20:00:09.341325  156390 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007973 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc      │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1           │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0    │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1            │ cd073f4c5f6a8 │ 740kB  │
│ registry.k8s.io/pause                   │ 3.3               │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0    │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1               │ da86e6ba6ca19 │ 747kB  │
│ localhost/minikube-local-cache-test     │ functional-007973 │ 9aac2ec33e58e │ 3.33kB │
│ registry.k8s.io/pause                   │ latest            │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest            │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-007973 │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0           │ a3e246e9556e9 │ 63.6MB │
└─────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007973 image ls --format table --alsologtostderr:
I1202 20:00:09.474729  156417 out.go:360] Setting OutFile to fd 1 ...
I1202 20:00:09.474819  156417 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.474827  156417 out.go:374] Setting ErrFile to fd 2...
I1202 20:00:09.474831  156417 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.475062  156417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 20:00:09.475637  156417 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.475764  156417 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.477802  156417 ssh_runner.go:195] Run: systemctl --version
I1202 20:00:09.480039  156417 main.go:143] libmachine: domain functional-007973 has defined MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.480553  156417 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:27:22", ip: ""} in network mk-functional-007973: {Iface:virbr1 ExpiryTime:2025-12-02 20:57:36 +0000 UTC Type:0 Mac:52:54:00:13:27:22 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-007973 Clientid:01:52:54:00:13:27:22}
I1202 20:00:09.480582  156417 main.go:143] libmachine: domain functional-007973 has defined IP address 192.168.39.38 and MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.480777  156417 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-007973/id_rsa Username:docker}
I1202 20:00:09.582273  156417 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007973 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31468661"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76869776"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71976228"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0
600946a449c2a3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52744336"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"739536"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9aac2ec33e58efb0f735c272e0b004133820665034a8525378075097a36f14e8","repoDigests":["localhost/mi
nikube-local-cache-test@sha256:5e4f7f3c71c2b01315b8f0a9f9901b695d1af7b36c6c703c48a246850f6d3e0e"],"repoTags":["localhost/minikube-local-cache-test:functional-007973"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63582165"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"si
ze":"4631262"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79190589"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicb
ase/echo-server:latest","localhost/kicbase/echo-server:functional-007973"],"size":"4944818"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90816810"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007973 image ls --format json --alsologtostderr:
I1202 20:00:09.456840  156411 out.go:360] Setting OutFile to fd 1 ...
I1202 20:00:09.457166  156411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.457178  156411 out.go:374] Setting ErrFile to fd 2...
I1202 20:00:09.457186  156411 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.457489  156411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 20:00:09.458291  156411 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.458466  156411 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.461271  156411 ssh_runner.go:195] Run: systemctl --version
I1202 20:00:09.463834  156411 main.go:143] libmachine: domain functional-007973 has defined MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.464321  156411 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:27:22", ip: ""} in network mk-functional-007973: {Iface:virbr1 ExpiryTime:2025-12-02 20:57:36 +0000 UTC Type:0 Mac:52:54:00:13:27:22 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-007973 Clientid:01:52:54:00:13:27:22}
I1202 20:00:09.464362  156411 main.go:143] libmachine: domain functional-007973 has defined IP address 192.168.39.38 and MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.464571  156411 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-007973/id_rsa Username:docker}
I1202 20:00:09.563009  156411 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.28s)
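The JSON above is a flat array of image objects, so it is easy to post-process on the host. A minimal sketch, assuming jq is available (the test itself does not use it), that prints only the tagged images with their sizes:
	out/minikube-linux-amd64 -p functional-007973 image ls --format json | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0]) \(.size)"'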

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls --format yaml --alsologtostderr
I1202 20:00:09.219252  147070 detect.go:223] nested VM detected
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007973 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9aac2ec33e58efb0f735c272e0b004133820665034a8525378075097a36f14e8
repoDigests:
- localhost/minikube-local-cache-test@sha256:5e4f7f3c71c2b01315b8f0a9f9901b695d1af7b36c6c703c48a246850f6d3e0e
repoTags:
- localhost/minikube-local-cache-test:functional-007973
size: "3330"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f852fad6b028092c481b57e7fcd16936a8aec43c2e4dccf5a0600946a449c2a3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52744336"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:dfca5e5f4caae19c3ac20d841ab02fe19647ef0dd97c41424007cceb417af7db
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79190589"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:09c404d47c88be54eaaf0af6edaecdc1a417bcf04522ffeaf62c4dc0ed5a6d10
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63582165"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:a8ad62a46c568df922febd0986d02f88bfe5e1a8f5e8dd0bd02a0cafffba019b
repoTags:
- registry.k8s.io/pause:3.10.1
size: "739536"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-007973
size: "4944818"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:dd50de52ebf30a673c65da77c8b4af5cbc6be3c475a2d8165796a7a7bdd0b9d5
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90816810"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0ed737a63ad50cf0d7049b0bd88755be8d5bc9fb5e39efdece79639b998532f6
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71976228"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31468661"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5e3bd70d468022881b995e23abf02a2d39ee87ebacd7018f6c478d9e01870b8b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76869776"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007973 image ls --format yaml --alsologtostderr:
I1202 20:00:09.210544  156391 out.go:360] Setting OutFile to fd 1 ...
I1202 20:00:09.210825  156391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.210836  156391 out.go:374] Setting ErrFile to fd 2...
I1202 20:00:09.210840  156391 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.211107  156391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 20:00:09.211984  156391 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.212181  156391 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.214582  156391 ssh_runner.go:195] Run: systemctl --version
I1202 20:00:09.217388  156391 main.go:143] libmachine: domain functional-007973 has defined MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.217965  156391 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:27:22", ip: ""} in network mk-functional-007973: {Iface:virbr1 ExpiryTime:2025-12-02 20:57:36 +0000 UTC Type:0 Mac:52:54:00:13:27:22 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-007973 Clientid:01:52:54:00:13:27:22}
I1202 20:00:09.217992  156391 main.go:143] libmachine: domain functional-007973 has defined IP address 192.168.39.38 and MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.218191  156391 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-007973/id_rsa Username:docker}
I1202 20:00:09.332002  156391 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (10.86s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-007973 ssh pgrep buildkitd: exit status 1 (203.099886ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image build -t localhost/my-image:functional-007973 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 image build -t localhost/my-image:functional-007973 testdata/build --alsologtostderr: (10.280554832s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-007973 image build -t localhost/my-image:functional-007973 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6cb6270251c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-007973
--> 50b2f3ca4c2
Successfully tagged localhost/my-image:functional-007973
50b2f3ca4c2fe118541f0ee98613ae1c859641ecbc69439caf445b971b64dc3c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-007973 image build -t localhost/my-image:functional-007973 testdata/build --alsologtostderr:
I1202 20:00:09.947213  156442 out.go:360] Setting OutFile to fd 1 ...
I1202 20:00:09.947460  156442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.947469  156442 out.go:374] Setting ErrFile to fd 2...
I1202 20:00:09.947473  156442 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1202 20:00:09.947672  156442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
I1202 20:00:09.948276  156442 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.949126  156442 config.go:182] Loaded profile config "functional-007973": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1202 20:00:09.951985  156442 ssh_runner.go:195] Run: systemctl --version
I1202 20:00:09.954775  156442 main.go:143] libmachine: domain functional-007973 has defined MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.955222  156442 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:27:22", ip: ""} in network mk-functional-007973: {Iface:virbr1 ExpiryTime:2025-12-02 20:57:36 +0000 UTC Type:0 Mac:52:54:00:13:27:22 Iaid: IPaddr:192.168.39.38 Prefix:24 Hostname:functional-007973 Clientid:01:52:54:00:13:27:22}
I1202 20:00:09.955252  156442 main.go:143] libmachine: domain functional-007973 has defined IP address 192.168.39.38 and MAC address 52:54:00:13:27:22 in network mk-functional-007973
I1202 20:00:09.955410  156442 sshutil.go:53] new ssh client: &{IP:192.168.39.38 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/functional-007973/id_rsa Username:docker}
I1202 20:00:10.053234  156442 build_images.go:162] Building image from path: /tmp/build.1110899290.tar
I1202 20:00:10.053325  156442 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1202 20:00:10.072404  156442 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1110899290.tar
I1202 20:00:10.082196  156442 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1110899290.tar: stat -c "%s %y" /var/lib/minikube/build/build.1110899290.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1110899290.tar': No such file or directory
I1202 20:00:10.082233  156442 ssh_runner.go:362] scp /tmp/build.1110899290.tar --> /var/lib/minikube/build/build.1110899290.tar (3072 bytes)
I1202 20:00:10.146788  156442 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1110899290
I1202 20:00:10.162087  156442 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1110899290 -xf /var/lib/minikube/build/build.1110899290.tar
I1202 20:00:10.183479  156442 crio.go:315] Building image: /var/lib/minikube/build/build.1110899290
I1202 20:00:10.183551  156442 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-007973 /var/lib/minikube/build/build.1110899290 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1202 20:00:20.113591  156442 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-007973 /var/lib/minikube/build/build.1110899290 --cgroup-manager=cgroupfs: (9.930009703s)
I1202 20:00:20.113697  156442 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1110899290
I1202 20:00:20.132326  156442 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1110899290.tar
I1202 20:00:20.146112  156442 build_images.go:218] Built localhost/my-image:functional-007973 from /tmp/build.1110899290.tar
I1202 20:00:20.146157  156442 build_images.go:134] succeeded building to: functional-007973
I1202 20:00:20.146162  156442 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (10.86s)
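The three STEP lines in the build output above imply a Dockerfile under testdata/build roughly like the following (reconstructed from the log; the repository file may differ, e.g. in the exact base-image tag):
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /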

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.38:30098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.35s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (2.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image load --daemon kicbase/echo-server:functional-007973 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 image load --daemon kicbase/echo-server:functional-007973 --alsologtostderr: (2.102845592s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (2.37s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.38:30098
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.34s)
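With the NodePort URL in hand, the hello-node service can also be probed directly from the host; a quick manual check (not part of the test, and both the IP and port are specific to this run):
	curl -s http://192.168.39.38:30098/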

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image load --daemon kicbase/echo-server:functional-007973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.679381725s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-007973
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image load --daemon kicbase/echo-server:functional-007973 --alsologtostderr
2025/12/02 20:00:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-007973 image load --daemon kicbase/echo-server:functional-007973 --alsologtostderr: (1.186626194s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.13s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image save kicbase/echo-server:functional-007973 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.65s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image rm kicbase/echo-server:functional-007973 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-007973
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-007973 image save --daemon kicbase/echo-server:functional-007973 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-007973
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (235.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1202 20:01:33.863923  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:33.870357  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:33.881812  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:33.903338  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:33.944969  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:34.026582  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:34.188199  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:34.509873  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:35.152007  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:36.433481  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:38.995389  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:44.117526  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:01:54.359002  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:12.168707  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:14.841015  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:02:55.802818  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:17.724747  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m55.152131919s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (235.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 kubectl -- rollout status deployment/busybox: (4.637051476s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-2sskz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-d7xb8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-xnqhg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-2sskz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-d7xb8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-xnqhg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-2sskz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-d7xb8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-xnqhg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-2sskz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-2sskz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-d7xb8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-d7xb8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-xnqhg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 kubectl -- exec busybox-7b57f96db7-xnqhg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node add --alsologtostderr -v 5
E1202 20:04:47.761300  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:47.767795  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:47.779286  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:47.800769  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:47.842388  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:47.924099  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:48.085887  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:48.407602  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:49.049388  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:50.330933  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:52.892971  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:04:58.014384  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:05:08.256202  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:05:28.738365  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 node add --alsologtostderr -v 5: (44.972448889s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.67s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-629500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp testdata/cp-test.txt ha-629500:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile777062782/001/cp-test_ha-629500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500:/home/docker/cp-test.txt ha-629500-m02:/home/docker/cp-test_ha-629500_ha-629500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test_ha-629500_ha-629500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500:/home/docker/cp-test.txt ha-629500-m03:/home/docker/cp-test_ha-629500_ha-629500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test_ha-629500_ha-629500-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500:/home/docker/cp-test.txt ha-629500-m04:/home/docker/cp-test_ha-629500_ha-629500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test_ha-629500_ha-629500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp testdata/cp-test.txt ha-629500-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile777062782/001/cp-test_ha-629500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m02:/home/docker/cp-test.txt ha-629500:/home/docker/cp-test_ha-629500-m02_ha-629500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test_ha-629500-m02_ha-629500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m02:/home/docker/cp-test.txt ha-629500-m03:/home/docker/cp-test_ha-629500-m02_ha-629500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test_ha-629500-m02_ha-629500-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m02:/home/docker/cp-test.txt ha-629500-m04:/home/docker/cp-test_ha-629500-m02_ha-629500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test_ha-629500-m02_ha-629500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp testdata/cp-test.txt ha-629500-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile777062782/001/cp-test_ha-629500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m03:/home/docker/cp-test.txt ha-629500:/home/docker/cp-test_ha-629500-m03_ha-629500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test_ha-629500-m03_ha-629500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m03:/home/docker/cp-test.txt ha-629500-m02:/home/docker/cp-test_ha-629500-m03_ha-629500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test_ha-629500-m03_ha-629500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m03:/home/docker/cp-test.txt ha-629500-m04:/home/docker/cp-test_ha-629500-m03_ha-629500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test_ha-629500-m03_ha-629500-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp testdata/cp-test.txt ha-629500-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile777062782/001/cp-test_ha-629500-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m04:/home/docker/cp-test.txt ha-629500:/home/docker/cp-test_ha-629500-m04_ha-629500.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500 "sudo cat /home/docker/cp-test_ha-629500-m04_ha-629500.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m04:/home/docker/cp-test.txt ha-629500-m02:/home/docker/cp-test_ha-629500-m04_ha-629500-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test_ha-629500-m04_ha-629500-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 cp ha-629500-m04:/home/docker/cp-test.txt ha-629500-m03:/home/docker/cp-test_ha-629500-m04_ha-629500-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m03 "sudo cat /home/docker/cp-test_ha-629500-m04_ha-629500-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.87s)
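
The whole block above is one pattern repeated for every node pair: copy a file onto a node with minikube cp, then read it back over minikube ssh to confirm it landed intact. A hand-run equivalent for a single pair, with the node and path taken from the log:

    # push the test file to the second control-plane node, then read it back
    out/minikube-linux-amd64 -p ha-629500 cp testdata/cp-test.txt ha-629500-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-629500 ssh -n ha-629500-m02 "sudo cat /home/docker/cp-test.txt"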

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (74.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node stop m02 --alsologtostderr -v 5
E1202 20:06:09.699853  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:06:33.864110  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 node stop m02 --alsologtostderr -v 5: (1m14.396892522s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5: exit status 7 (536.45162ms)

                                                
                                                
-- stdout --
	ha-629500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-629500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-629500-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-629500-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:06:59.137300  159691 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:06:59.137415  159691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:06:59.137423  159691 out.go:374] Setting ErrFile to fd 2...
	I1202 20:06:59.137426  159691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:06:59.137672  159691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:06:59.137868  159691 out.go:368] Setting JSON to false
	I1202 20:06:59.137895  159691 mustload.go:66] Loading cluster: ha-629500
	I1202 20:06:59.137962  159691 notify.go:221] Checking for updates...
	I1202 20:06:59.138430  159691 config.go:182] Loaded profile config "ha-629500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:06:59.138451  159691 status.go:174] checking status of ha-629500 ...
	I1202 20:06:59.140843  159691 status.go:371] ha-629500 host status = "Running" (err=<nil>)
	I1202 20:06:59.140862  159691 host.go:66] Checking if "ha-629500" exists ...
	I1202 20:06:59.143773  159691 main.go:143] libmachine: domain ha-629500 has defined MAC address 52:54:00:3e:68:ed in network mk-ha-629500
	I1202 20:06:59.144407  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3e:68:ed", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:00:58 +0000 UTC Type:0 Mac:52:54:00:3e:68:ed Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-629500 Clientid:01:52:54:00:3e:68:ed}
	I1202 20:06:59.144452  159691 main.go:143] libmachine: domain ha-629500 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:68:ed in network mk-ha-629500
	I1202 20:06:59.144706  159691 host.go:66] Checking if "ha-629500" exists ...
	I1202 20:06:59.145107  159691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:06:59.147790  159691 main.go:143] libmachine: domain ha-629500 has defined MAC address 52:54:00:3e:68:ed in network mk-ha-629500
	I1202 20:06:59.148202  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3e:68:ed", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:00:58 +0000 UTC Type:0 Mac:52:54:00:3e:68:ed Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-629500 Clientid:01:52:54:00:3e:68:ed}
	I1202 20:06:59.148225  159691 main.go:143] libmachine: domain ha-629500 has defined IP address 192.168.39.147 and MAC address 52:54:00:3e:68:ed in network mk-ha-629500
	I1202 20:06:59.148411  159691 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/ha-629500/id_rsa Username:docker}
	I1202 20:06:59.245944  159691 ssh_runner.go:195] Run: systemctl --version
	I1202 20:06:59.253975  159691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:06:59.273649  159691 kubeconfig.go:125] found "ha-629500" server: "https://192.168.39.254:8443"
	I1202 20:06:59.273697  159691 api_server.go:166] Checking apiserver status ...
	I1202 20:06:59.273733  159691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:06:59.299304  159691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W1202 20:06:59.312630  159691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:06:59.312694  159691 ssh_runner.go:195] Run: ls
	I1202 20:06:59.318380  159691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1202 20:06:59.326134  159691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1202 20:06:59.326166  159691 status.go:463] ha-629500 apiserver status = Running (err=<nil>)
	I1202 20:06:59.326178  159691 status.go:176] ha-629500 status: &{Name:ha-629500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:06:59.326201  159691 status.go:174] checking status of ha-629500-m02 ...
	I1202 20:06:59.327941  159691 status.go:371] ha-629500-m02 host status = "Stopped" (err=<nil>)
	I1202 20:06:59.327965  159691 status.go:384] host is not running, skipping remaining checks
	I1202 20:06:59.327971  159691 status.go:176] ha-629500-m02 status: &{Name:ha-629500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:06:59.327988  159691 status.go:174] checking status of ha-629500-m03 ...
	I1202 20:06:59.329418  159691 status.go:371] ha-629500-m03 host status = "Running" (err=<nil>)
	I1202 20:06:59.329438  159691 host.go:66] Checking if "ha-629500-m03" exists ...
	I1202 20:06:59.332233  159691 main.go:143] libmachine: domain ha-629500-m03 has defined MAC address 52:54:00:e2:02:06 in network mk-ha-629500
	I1202 20:06:59.332710  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:02:06", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e2:02:06 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-629500-m03 Clientid:01:52:54:00:e2:02:06}
	I1202 20:06:59.332738  159691 main.go:143] libmachine: domain ha-629500-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:e2:02:06 in network mk-ha-629500
	I1202 20:06:59.332930  159691 host.go:66] Checking if "ha-629500-m03" exists ...
	I1202 20:06:59.333156  159691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:06:59.335509  159691 main.go:143] libmachine: domain ha-629500-m03 has defined MAC address 52:54:00:e2:02:06 in network mk-ha-629500
	I1202 20:06:59.336052  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:02:06", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:03:25 +0000 UTC Type:0 Mac:52:54:00:e2:02:06 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-629500-m03 Clientid:01:52:54:00:e2:02:06}
	I1202 20:06:59.336077  159691 main.go:143] libmachine: domain ha-629500-m03 has defined IP address 192.168.39.174 and MAC address 52:54:00:e2:02:06 in network mk-ha-629500
	I1202 20:06:59.336255  159691 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/ha-629500-m03/id_rsa Username:docker}
	I1202 20:06:59.425689  159691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:06:59.445381  159691 kubeconfig.go:125] found "ha-629500" server: "https://192.168.39.254:8443"
	I1202 20:06:59.445411  159691 api_server.go:166] Checking apiserver status ...
	I1202 20:06:59.445448  159691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:06:59.466753  159691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1848/cgroup
	W1202 20:06:59.478613  159691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1848/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:06:59.478702  159691 ssh_runner.go:195] Run: ls
	I1202 20:06:59.483965  159691 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1202 20:06:59.489053  159691 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1202 20:06:59.489081  159691 status.go:463] ha-629500-m03 apiserver status = Running (err=<nil>)
	I1202 20:06:59.489094  159691 status.go:176] ha-629500-m03 status: &{Name:ha-629500-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:06:59.489116  159691 status.go:174] checking status of ha-629500-m04 ...
	I1202 20:06:59.490988  159691 status.go:371] ha-629500-m04 host status = "Running" (err=<nil>)
	I1202 20:06:59.491014  159691 host.go:66] Checking if "ha-629500-m04" exists ...
	I1202 20:06:59.494051  159691 main.go:143] libmachine: domain ha-629500-m04 has defined MAC address 52:54:00:42:f9:11 in network mk-ha-629500
	I1202 20:06:59.494503  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:f9:11", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:05:03 +0000 UTC Type:0 Mac:52:54:00:42:f9:11 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-629500-m04 Clientid:01:52:54:00:42:f9:11}
	I1202 20:06:59.494543  159691 main.go:143] libmachine: domain ha-629500-m04 has defined IP address 192.168.39.42 and MAC address 52:54:00:42:f9:11 in network mk-ha-629500
	I1202 20:06:59.494744  159691 host.go:66] Checking if "ha-629500-m04" exists ...
	I1202 20:06:59.495026  159691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:06:59.497915  159691 main.go:143] libmachine: domain ha-629500-m04 has defined MAC address 52:54:00:42:f9:11 in network mk-ha-629500
	I1202 20:06:59.498419  159691 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:f9:11", ip: ""} in network mk-ha-629500: {Iface:virbr1 ExpiryTime:2025-12-02 21:05:03 +0000 UTC Type:0 Mac:52:54:00:42:f9:11 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-629500-m04 Clientid:01:52:54:00:42:f9:11}
	I1202 20:06:59.498444  159691 main.go:143] libmachine: domain ha-629500-m04 has defined IP address 192.168.39.42 and MAC address 52:54:00:42:f9:11 in network mk-ha-629500
	I1202 20:06:59.498589  159691 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/ha-629500-m04/id_rsa Username:docker}
	I1202 20:06:59.585252  159691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:06:59.605039  159691 status.go:176] ha-629500-m04 status: &{Name:ha-629500-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (74.93s)
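
The non-zero exit above is expected rather than a failure: in these runs, minikube status returns exit status 7 whenever any node in the profile is reported Stopped, while stdout still lists the per-node state. That makes the same check easy to script without parsing the text output:

    # in these logs, exit status 7 from "status" indicates at least one stopped node
    out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
    echo "status exit code: $?"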

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (35.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node start m02 --alsologtostderr -v 5
E1202 20:07:01.566593  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:07:12.171013  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:07:31.621790  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 node start m02 --alsologtostderr -v 5: (34.122182886s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (35.04s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 stop --alsologtostderr -v 5
E1202 20:08:35.247063  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:09:47.762472  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:10:15.463257  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:11:33.863893  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 stop --alsologtostderr -v 5: (4m17.510966523s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 start --wait true --alsologtostderr -v 5
E1202 20:12:12.168575  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 start --wait true --alsologtostderr -v 5: (1m52.247746134s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.91s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 node delete m03 --alsologtostderr -v 5: (17.418837347s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.11s)
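
The final go-template query is what confirms the remaining nodes are all Ready after the delete: it walks each node's status.conditions and prints only the Ready condition's status. Restated with shell-friendly quoting (the template itself is unchanged from the log):

    # print the Ready condition status ("True"/"False") for every remaining node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'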

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (255.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 stop --alsologtostderr -v 5
E1202 20:14:47.763731  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:16:33.864038  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:12.169871  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:17:56.930343  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 stop --alsologtostderr -v 5: (4m15.37875302s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5: exit status 7 (69.323809ms)

                                                
                                                
-- stdout --
	ha-629500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-629500-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-629500-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:18:20.063402  163347 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:18:20.063683  163347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:18:20.063693  163347 out.go:374] Setting ErrFile to fd 2...
	I1202 20:18:20.063697  163347 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:18:20.063941  163347 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:18:20.064196  163347 out.go:368] Setting JSON to false
	I1202 20:18:20.064225  163347 mustload.go:66] Loading cluster: ha-629500
	I1202 20:18:20.064385  163347 notify.go:221] Checking for updates...
	I1202 20:18:20.064753  163347 config.go:182] Loaded profile config "ha-629500": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:18:20.064776  163347 status.go:174] checking status of ha-629500 ...
	I1202 20:18:20.067185  163347 status.go:371] ha-629500 host status = "Stopped" (err=<nil>)
	I1202 20:18:20.067206  163347 status.go:384] host is not running, skipping remaining checks
	I1202 20:18:20.067212  163347 status.go:176] ha-629500 status: &{Name:ha-629500 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:18:20.067235  163347 status.go:174] checking status of ha-629500-m02 ...
	I1202 20:18:20.068755  163347 status.go:371] ha-629500-m02 host status = "Stopped" (err=<nil>)
	I1202 20:18:20.068772  163347 status.go:384] host is not running, skipping remaining checks
	I1202 20:18:20.068778  163347 status.go:176] ha-629500-m02 status: &{Name:ha-629500-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:18:20.068794  163347 status.go:174] checking status of ha-629500-m04 ...
	I1202 20:18:20.070473  163347 status.go:371] ha-629500-m04 host status = "Stopped" (err=<nil>)
	I1202 20:18:20.070487  163347 status.go:384] host is not running, skipping remaining checks
	I1202 20:18:20.070491  163347 status.go:176] ha-629500-m04 status: &{Name:ha-629500-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (255.45s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (99.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1202 20:19:47.761453  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m38.86852898s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (99.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 node add --control-plane --alsologtostderr -v 5
E1202 20:21:10.826496  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-629500 node add --control-plane --alsologtostderr -v 5: (1m15.824446073s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-629500 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.70s)

                                                
                                    
TestJSONOutput/start/Command (52.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-124580 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1202 20:21:33.864190  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:22:12.168571  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-124580 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (52.687795899s)
--- PASS: TestJSONOutput/start/Command (52.69s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-124580 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-124580 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.93s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-124580 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-124580 --output=json --user=testUser: (6.929733971s)
--- PASS: TestJSONOutput/stop/Command (6.93s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-048995 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-048995 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.531366ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"976f8872-2d45-46a7-81e3-b7995faa4ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-048995] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f0d6e8a-2080-4bd6-aed0-a239f2a9927d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21997"}}
	{"specversion":"1.0","id":"ace366ae-39e4-4221-bda5-07c9fa335919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8ad33070-817f-4fb0-a48e-2b1020c5155f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig"}}
	{"specversion":"1.0","id":"73e802cd-9b86-4865-9ad8-0cdb222df156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube"}}
	{"specversion":"1.0","id":"b36203f7-099a-43cf-b2da-fd70ceb81df6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cbb5b5a7-8daa-4e0f-b528-9dd4fc308006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f917b7c9-eb6b-4f3c-b247-ca6e03e94c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-048995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-048995
--- PASS: TestErrorJSONOutput (0.24s)
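
Every line in the --output=json stream above is a CloudEvents-style JSON object, so the failure can be extracted mechanically instead of by eye. A sketch, assuming jq is available on the host; the select filter and the data.* fields come directly from the error event shown above, and the throwaway profile is deleted afterwards just as the test does:

    # re-run the intentionally failing start and keep only the error event
    out/minikube-linux-amd64 start -p json-output-error-048995 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit code \(.data.exitcode))"'
    # clean up the throwaway profile
    out/minikube-linux-amd64 delete -p json-output-error-048995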

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-481723 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-481723 --driver=kvm2  --container-runtime=crio: (36.257047133s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-484088 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-484088 --driver=kvm2  --container-runtime=crio: (38.368832194s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-481723
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-484088
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-484088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-484088
helpers_test.go:175: Cleaning up "first-481723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-481723
--- PASS: TestMinikubeProfile (77.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-741853 --memory=3072 --mount-string /tmp/TestMountStartserial1394321059/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-741853 --memory=3072 --mount-string /tmp/TestMountStartserial1394321059/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.412390115s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.41s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-741853 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-741853 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
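
The verification pairs an ls of the guest path with findmnt --json, which confirms that the host directory passed via --mount-string (/tmp/TestMountStartserial1394321059/001 in the start command above) is actually mounted at /minikube-host inside the VM rather than being an empty directory. Run by hand it is simply:

    # list the mounted host directory, then dump the mount entry itself as JSON
    out/minikube-linux-amd64 -p mount-start-1-741853 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-741853 ssh -- findmnt --json /minikube-host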

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-761823 --memory=3072 --mount-string /tmp/TestMountStartserial1394321059/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-761823 --memory=3072 --mount-string /tmp/TestMountStartserial1394321059/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.970950356s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.97s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-741853 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-761823
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-761823: (1.250038015s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.02s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-761823
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-761823: (18.024050962s)
--- PASS: TestMountStart/serial/RestartStopped (19.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-761823 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (96.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519659 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 20:24:47.762153  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:25:15.249436  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519659 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m36.335391502s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.68s)
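
In CLI terms, the multi-node bring-up above is a single start with --nodes=2 followed by a status check. A minimal recap of the commands the test runs (the profile name is the test's):

    # create a two-node cluster (one control plane, one worker) and wait for all components
    minikube start -p multinode-519659 --nodes=2 --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    # report host/kubelet/apiserver state for every node in the profile
    minikube -p multinode-519659 status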

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-519659 -- rollout status deployment/busybox: (4.524785512s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-5mtc7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-zx4tc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-5mtc7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-zx4tc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-5mtc7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-zx4tc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.20s)
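
The deployment check follows the usual kubectl pattern: apply a busybox Deployment from the test manifest, wait for the rollout, then exec into the replicas to resolve in-cluster DNS names. A condensed sketch; it uses kubectl's deploy/<name> exec shorthand rather than the concrete pod names the test looks up, and the manifest path is the test's:

    minikube kubectl -p multinode-519659 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-519659 -- rollout status deployment/busybox
    # resolve an in-cluster service name from inside one of the busybox replicas
    minikube kubectl -p multinode-519659 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local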

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-5mtc7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-5mtc7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-zx4tc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-519659 -- exec busybox-7b57f96db7-zx4tc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

                                                
                                    
TestMultiNode/serial/AddNode (42.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-519659 -v=5 --alsologtostderr
E1202 20:26:33.863096  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-519659 -v=5 --alsologtostderr: (42.137195899s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.60s)
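
Growing the cluster is a single command; status afterwards lists the new machine (m03 here) as a worker. Recap of the commands above, with the verbose flags dropped:

    # add another worker node to the running multi-node profile
    minikube node add -p multinode-519659
    minikube -p multinode-519659 status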

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-519659 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp testdata/cp-test.txt multinode-519659:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1539050631/001/cp-test_multinode-519659.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659:/home/docker/cp-test.txt multinode-519659-m02:/home/docker/cp-test_multinode-519659_multinode-519659-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test_multinode-519659_multinode-519659-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659:/home/docker/cp-test.txt multinode-519659-m03:/home/docker/cp-test_multinode-519659_multinode-519659-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test_multinode-519659_multinode-519659-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp testdata/cp-test.txt multinode-519659-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1539050631/001/cp-test_multinode-519659-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m02:/home/docker/cp-test.txt multinode-519659:/home/docker/cp-test_multinode-519659-m02_multinode-519659.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test.txt"
E1202 20:27:12.168467  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test_multinode-519659-m02_multinode-519659.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m02:/home/docker/cp-test.txt multinode-519659-m03:/home/docker/cp-test_multinode-519659-m02_multinode-519659-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test_multinode-519659-m02_multinode-519659-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp testdata/cp-test.txt multinode-519659-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1539050631/001/cp-test_multinode-519659-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m03:/home/docker/cp-test.txt multinode-519659:/home/docker/cp-test_multinode-519659-m03_multinode-519659.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659 "sudo cat /home/docker/cp-test_multinode-519659-m03_multinode-519659.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 cp multinode-519659-m03:/home/docker/cp-test.txt multinode-519659-m02:/home/docker/cp-test_multinode-519659-m03_multinode-519659-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test_multinode-519659-m03_multinode-519659-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.17s)
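
The copy test exercises minikube cp in every direction: host to node, node to host, and node to node, verifying each copy with ssh -n <node>. A condensed sketch of the pattern; the destination file names here are simplified from the test's temp-dir paths:

    # host -> primary node
    minikube -p multinode-519659 cp testdata/cp-test.txt multinode-519659:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-519659 cp multinode-519659:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node, then verify on the destination node
    minikube -p multinode-519659 cp multinode-519659:/home/docker/cp-test.txt multinode-519659-m02:/home/docker/cp-test.txt
    minikube -p multinode-519659 ssh -n multinode-519659-m02 "sudo cat /home/docker/cp-test.txt"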

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-519659 node stop m03: (1.516420724s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519659 status: exit status 7 (357.272481ms)

                                                
                                                
-- stdout --
	multinode-519659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-519659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-519659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr: exit status 7 (353.797785ms)

                                                
                                                
-- stdout --
	multinode-519659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-519659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-519659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:27:16.886598  168788 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:27:16.886844  168788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:27:16.886852  168788 out.go:374] Setting ErrFile to fd 2...
	I1202 20:27:16.886857  168788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:27:16.887053  168788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:27:16.887212  168788 out.go:368] Setting JSON to false
	I1202 20:27:16.887237  168788 mustload.go:66] Loading cluster: multinode-519659
	I1202 20:27:16.887371  168788 notify.go:221] Checking for updates...
	I1202 20:27:16.887587  168788 config.go:182] Loaded profile config "multinode-519659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:27:16.887602  168788 status.go:174] checking status of multinode-519659 ...
	I1202 20:27:16.889738  168788 status.go:371] multinode-519659 host status = "Running" (err=<nil>)
	I1202 20:27:16.889757  168788 host.go:66] Checking if "multinode-519659" exists ...
	I1202 20:27:16.892561  168788 main.go:143] libmachine: domain multinode-519659 has defined MAC address 52:54:00:fe:38:2b in network mk-multinode-519659
	I1202 20:27:16.893197  168788 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:38:2b", ip: ""} in network mk-multinode-519659: {Iface:virbr1 ExpiryTime:2025-12-02 21:24:56 +0000 UTC Type:0 Mac:52:54:00:fe:38:2b Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-519659 Clientid:01:52:54:00:fe:38:2b}
	I1202 20:27:16.893228  168788 main.go:143] libmachine: domain multinode-519659 has defined IP address 192.168.39.193 and MAC address 52:54:00:fe:38:2b in network mk-multinode-519659
	I1202 20:27:16.893370  168788 host.go:66] Checking if "multinode-519659" exists ...
	I1202 20:27:16.893557  168788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:27:16.896129  168788 main.go:143] libmachine: domain multinode-519659 has defined MAC address 52:54:00:fe:38:2b in network mk-multinode-519659
	I1202 20:27:16.896628  168788 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:38:2b", ip: ""} in network mk-multinode-519659: {Iface:virbr1 ExpiryTime:2025-12-02 21:24:56 +0000 UTC Type:0 Mac:52:54:00:fe:38:2b Iaid: IPaddr:192.168.39.193 Prefix:24 Hostname:multinode-519659 Clientid:01:52:54:00:fe:38:2b}
	I1202 20:27:16.896678  168788 main.go:143] libmachine: domain multinode-519659 has defined IP address 192.168.39.193 and MAC address 52:54:00:fe:38:2b in network mk-multinode-519659
	I1202 20:27:16.896830  168788 sshutil.go:53] new ssh client: &{IP:192.168.39.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/multinode-519659/id_rsa Username:docker}
	I1202 20:27:16.983921  168788 ssh_runner.go:195] Run: systemctl --version
	I1202 20:27:16.990320  168788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:27:17.014233  168788 kubeconfig.go:125] found "multinode-519659" server: "https://192.168.39.193:8443"
	I1202 20:27:17.014270  168788 api_server.go:166] Checking apiserver status ...
	I1202 20:27:17.014312  168788 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1202 20:27:17.040465  168788 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1343/cgroup
	W1202 20:27:17.054575  168788 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1343/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1202 20:27:17.054639  168788 ssh_runner.go:195] Run: ls
	I1202 20:27:17.060684  168788 api_server.go:253] Checking apiserver healthz at https://192.168.39.193:8443/healthz ...
	I1202 20:27:17.065745  168788 api_server.go:279] https://192.168.39.193:8443/healthz returned 200:
	ok
	I1202 20:27:17.065772  168788 status.go:463] multinode-519659 apiserver status = Running (err=<nil>)
	I1202 20:27:17.065783  168788 status.go:176] multinode-519659 status: &{Name:multinode-519659 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:27:17.065804  168788 status.go:174] checking status of multinode-519659-m02 ...
	I1202 20:27:17.067714  168788 status.go:371] multinode-519659-m02 host status = "Running" (err=<nil>)
	I1202 20:27:17.067733  168788 host.go:66] Checking if "multinode-519659-m02" exists ...
	I1202 20:27:17.070502  168788 main.go:143] libmachine: domain multinode-519659-m02 has defined MAC address 52:54:00:3c:e2:5c in network mk-multinode-519659
	I1202 20:27:17.071145  168788 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:e2:5c", ip: ""} in network mk-multinode-519659: {Iface:virbr1 ExpiryTime:2025-12-02 21:25:50 +0000 UTC Type:0 Mac:52:54:00:3c:e2:5c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-519659-m02 Clientid:01:52:54:00:3c:e2:5c}
	I1202 20:27:17.071174  168788 main.go:143] libmachine: domain multinode-519659-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:3c:e2:5c in network mk-multinode-519659
	I1202 20:27:17.071370  168788 host.go:66] Checking if "multinode-519659-m02" exists ...
	I1202 20:27:17.071636  168788 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1202 20:27:17.074450  168788 main.go:143] libmachine: domain multinode-519659-m02 has defined MAC address 52:54:00:3c:e2:5c in network mk-multinode-519659
	I1202 20:27:17.074912  168788 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3c:e2:5c", ip: ""} in network mk-multinode-519659: {Iface:virbr1 ExpiryTime:2025-12-02 21:25:50 +0000 UTC Type:0 Mac:52:54:00:3c:e2:5c Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:multinode-519659-m02 Clientid:01:52:54:00:3c:e2:5c}
	I1202 20:27:17.074936  168788 main.go:143] libmachine: domain multinode-519659-m02 has defined IP address 192.168.39.190 and MAC address 52:54:00:3c:e2:5c in network mk-multinode-519659
	I1202 20:27:17.075070  168788 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21997-143119/.minikube/machines/multinode-519659-m02/id_rsa Username:docker}
	I1202 20:27:17.159179  168788 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1202 20:27:17.177370  168788 status.go:176] multinode-519659-m02 status: &{Name:multinode-519659-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:27:17.177411  168788 status.go:174] checking status of multinode-519659-m03 ...
	I1202 20:27:17.179458  168788 status.go:371] multinode-519659-m03 host status = "Stopped" (err=<nil>)
	I1202 20:27:17.179479  168788 status.go:384] host is not running, skipping remaining checks
	I1202 20:27:17.179485  168788 status.go:176] multinode-519659-m03 status: &{Name:multinode-519659-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-519659 node start m03 -v=5 --alsologtostderr: (39.229200387s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.75s)
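
A single worker can be stopped and restarted without touching the rest of the cluster; while one host is down, status exits with code 7, as the StopNode output above shows. The two tests reduce to:

    # stop only the m03 worker; status now reports it as Stopped and exits non-zero
    minikube -p multinode-519659 node stop m03
    minikube -p multinode-519659 status
    # bring the stopped worker back and confirm it rejoins the cluster
    minikube -p multinode-519659 node start m03
    kubectl get nodes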

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (298.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519659
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-519659
E1202 20:29:47.764430  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-519659: (2m56.885899338s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519659 --wait=true -v=5 --alsologtostderr
E1202 20:31:33.864067  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:32:12.168907  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519659 --wait=true -v=5 --alsologtostderr: (2m1.550098043s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519659
--- PASS: TestMultiNode/serial/RestartKeepsNodes (298.57s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-519659 node delete m03: (2.232998378s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.71s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (162.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 stop
E1202 20:34:36.932391  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:34:47.761618  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-519659 stop: (2m41.882591338s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519659 status: exit status 7 (65.480664ms)

                                                
                                                
-- stdout --
	multinode-519659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-519659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr: exit status 7 (65.254487ms)

                                                
                                                
-- stdout --
	multinode-519659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-519659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:35:40.218359  171129 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:35:40.218463  171129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:35:40.218468  171129 out.go:374] Setting ErrFile to fd 2...
	I1202 20:35:40.218471  171129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:35:40.218666  171129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:35:40.218833  171129 out.go:368] Setting JSON to false
	I1202 20:35:40.218858  171129 mustload.go:66] Loading cluster: multinode-519659
	I1202 20:35:40.218982  171129 notify.go:221] Checking for updates...
	I1202 20:35:40.219181  171129 config.go:182] Loaded profile config "multinode-519659": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:35:40.219196  171129 status.go:174] checking status of multinode-519659 ...
	I1202 20:35:40.221327  171129 status.go:371] multinode-519659 host status = "Stopped" (err=<nil>)
	I1202 20:35:40.221347  171129 status.go:384] host is not running, skipping remaining checks
	I1202 20:35:40.221353  171129 status.go:176] multinode-519659 status: &{Name:multinode-519659 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1202 20:35:40.221369  171129 status.go:174] checking status of multinode-519659-m02 ...
	I1202 20:35:40.222971  171129 status.go:371] multinode-519659-m02 host status = "Stopped" (err=<nil>)
	I1202 20:35:40.222993  171129 status.go:384] host is not running, skipping remaining checks
	I1202 20:35:40.223000  171129 status.go:176] multinode-519659-m02 status: &{Name:multinode-519659-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (162.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (81.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519659 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1202 20:36:33.863651  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519659 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m21.508317836s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-519659 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.97s)
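
Stopping and restarting the whole profile keeps its node list: stop halts every machine, and a subsequent start with --wait=true restores both nodes, which is what the StopMultiNode and RestartMultiNode tests above verify. In command form:

    minikube -p multinode-519659 stop
    minikube start -p multinode-519659 --wait=true --driver=kvm2 --container-runtime=crio
    minikube -p multinode-519659 status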

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-519659
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519659-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-519659-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.684808ms)

                                                
                                                
-- stdout --
	* [multinode-519659-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-519659-m02' is duplicated with machine name 'multinode-519659-m02' in profile 'multinode-519659'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-519659-m03 --driver=kvm2  --container-runtime=crio
E1202 20:37:12.171648  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-519659-m03 --driver=kvm2  --container-runtime=crio: (42.762907043s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-519659
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-519659: exit status 80 (200.84711ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-519659 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-519659-m03 already exists in multinode-519659-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-519659-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.93s)

                                                
                                    
TestScheduledStopUnix (108.83s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-945154 --memory=3072 --driver=kvm2  --container-runtime=crio
E1202 20:39:47.764937  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-945154 --memory=3072 --driver=kvm2  --container-runtime=crio: (37.137082852s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-945154 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:40:18.640311  173293 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:40:18.640426  173293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:18.640434  173293 out.go:374] Setting ErrFile to fd 2...
	I1202 20:40:18.640438  173293 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:18.640613  173293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:40:18.640868  173293 out.go:368] Setting JSON to false
	I1202 20:40:18.640953  173293 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:18.641253  173293 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:40:18.641316  173293 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/config.json ...
	I1202 20:40:18.641530  173293 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:18.641646  173293 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-945154 -n scheduled-stop-945154
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-945154 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:40:18.939616  173340 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:40:18.939878  173340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:18.939887  173340 out.go:374] Setting ErrFile to fd 2...
	I1202 20:40:18.939891  173340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:18.940083  173340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:40:18.940323  173340 out.go:368] Setting JSON to false
	I1202 20:40:18.940518  173340 daemonize_unix.go:73] killing process 173328 as it is an old scheduled stop
	I1202 20:40:18.940623  173340 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:18.941000  173340 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:40:18.941084  173340 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/config.json ...
	I1202 20:40:18.941272  173340 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:18.941391  173340 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1202 20:40:18.946127  147070 retry.go:31] will retry after 76.356µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.947287  147070 retry.go:31] will retry after 90.833µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.948439  147070 retry.go:31] will retry after 275.112µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.949626  147070 retry.go:31] will retry after 306.132µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.950716  147070 retry.go:31] will retry after 671.597µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.951848  147070 retry.go:31] will retry after 771.244µs: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.952974  147070 retry.go:31] will retry after 1.386946ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.955228  147070 retry.go:31] will retry after 1.074568ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.956390  147070 retry.go:31] will retry after 2.342514ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.959651  147070 retry.go:31] will retry after 2.795462ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.962886  147070 retry.go:31] will retry after 5.649687ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.969146  147070 retry.go:31] will retry after 8.334092ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.978418  147070 retry.go:31] will retry after 16.840806ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:18.995746  147070 retry.go:31] will retry after 23.604841ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:19.020030  147070 retry.go:31] will retry after 34.226758ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
I1202 20:40:19.055312  147070 retry.go:31] will retry after 58.179457ms: open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-945154 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-945154 -n scheduled-stop-945154
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-945154
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-945154 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1202 20:40:44.706175  173505 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:40:44.706289  173505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:44.706294  173505 out.go:374] Setting ErrFile to fd 2...
	I1202 20:40:44.706299  173505 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:40:44.706479  173505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:40:44.706730  173505 out.go:368] Setting JSON to false
	I1202 20:40:44.706806  173505 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:44.707129  173505 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:40:44.707192  173505 profile.go:143] Saving config to /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/scheduled-stop-945154/config.json ...
	I1202 20:40:44.707378  173505 mustload.go:66] Loading cluster: scheduled-stop-945154
	I1202 20:40:44.707480  173505 config.go:182] Loaded profile config "scheduled-stop-945154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-945154
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-945154: exit status 7 (63.772144ms)

                                                
                                                
-- stdout --
	scheduled-stop-945154
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-945154 -n scheduled-stop-945154
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-945154 -n scheduled-stop-945154: exit status 7 (63.411921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-945154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-945154
--- PASS: TestScheduledStopUnix (108.83s)
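
The scheduled-stop test drives the --schedule and --cancel-scheduled flags of minikube stop: a later schedule replaces an earlier one (the old scheduled-stop process is killed), --cancel-scheduled clears it, and once a short schedule is left to fire, status reports the host as Stopped with exit code 7. Recap of the flag usage above:

    # schedule a stop 5 minutes out, then replace it with a 15-second schedule
    minikube stop -p scheduled-stop-945154 --schedule 5m
    minikube stop -p scheduled-stop-945154 --schedule 15s
    # cancel whatever stop is currently scheduled
    minikube stop -p scheduled-stop-945154 --cancel-scheduled
    # after a schedule has fired, the profile reports Stopped
    minikube status -p scheduled-stop-945154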

                                                
                                    
TestRunningBinaryUpgrade (379.56s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1922777016 start -p running-upgrade-651144 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1922777016 start -p running-upgrade-651144 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m21.082399139s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-651144 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-651144 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m54.101825081s)
helpers_test.go:175: Cleaning up "running-upgrade-651144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-651144
E1202 20:49:47.761564  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestRunningBinaryUpgrade (379.56s)
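
The running-binary upgrade starts a profile with an older released minikube (downloaded to a temporary path by the test) and then reruns start on the same profile with the freshly built binary, upgrading it in place while it is running. Schematically, using the test's own binary paths:

    # create the profile with the previous release
    /tmp/minikube-v1.35.0.1922777016 start -p running-upgrade-651144 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    # rerun start against the same profile with the binary under test
    out/minikube-linux-amd64 start -p running-upgrade-651144 --memory=3072 --driver=kvm2 --container-runtime=crio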

                                                
                                    
TestKubernetesUpgrade (154.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.785550888s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-950537
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-950537: (1.970556833s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-950537 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-950537 status --format={{.Host}}: exit status 7 (78.134907ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1202 20:44:47.763475  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.745069336s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-950537 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.078826ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-950537] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-950537
	    minikube start -p kubernetes-upgrade-950537 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9505372 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-950537 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.564296516s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-950537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-950537
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-950537: (1.005540476s)
--- PASS: TestKubernetesUpgrade (154.31s)
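
The Kubernetes upgrade path above is: start on an old version, stop, start again with a newer --kubernetes-version, and confirm that asking for a lower version afterwards is refused with K8S_DOWNGRADE_UNSUPPORTED (exit code 106), which points at delete-and-recreate instead. In command form:

    minikube start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-950537
    # upgrade the stopped cluster in place to the newer version
    minikube start -p kubernetes-upgrade-950537 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
    # a downgrade attempt fails; the supported path is to delete and recreate at the older version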

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (108.015624ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-897006] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
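Exit status 14 is the usage error the subtest is looking for: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of how the conflict is normally resolved: either drop --kubernetes-version, or, if it comes from a global config value, unset it first:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-897006 --no-kubernetes --driver=kvm2 --container-runtime=crio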
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-897006 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1202 20:41:33.863559  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:41:55.251796  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:42:12.169077  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-897006 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.218183205s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-897006 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (29.283273788s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-897006 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-897006 status -o json: exit status 2 (225.496902ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-897006","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-897006
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.44s)
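The status output above is the expected shape after switching an existing profile to --no-kubernetes: the host is Running while the kubelet and API server are Stopped. A sketch of extracting individual fields from the same JSON, assuming jq is available on the host:

    out/minikube-linux-amd64 -p NoKubernetes-897006 status -o json | jq -r '.Host, .Kubelet, .APIServer'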

                                                
                                    
TestNoKubernetes/serial/Start (19.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-897006 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (19.929951913s)
--- PASS: TestNoKubernetes/serial/Start (19.93s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21997-143119/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
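The check here presumably asserts that no Kubernetes binaries were cached for a profile started with --no-kubernetes (v0.0.0 being the placeholder version recorded for such profiles). A manual spot-check of the same directory, as a sketch only:

    ls -la /home/jenkins/minikube-integration/21997-143119/.minikube/cache/linux/amd64/v0.0.0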

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-897006 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-897006 "sudo systemctl is-active --quiet service kubelet": exit status 1 (170.128915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
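The assertion rests on systemctl's exit code: is-active --quiet prints nothing and exits non-zero when the unit is inactive or missing, which the test reads as "kubelet is not running". An equivalent manual check inside the guest, as a sketch:

    sudo systemctl is-active --quiet kubelet && echo "kubelet active" || echo "kubelet not active"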

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-897006
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-897006: (1.316091015s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (53.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-897006 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-897006 --driver=kvm2  --container-runtime=crio: (53.086414101s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.09s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-897006 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-897006 "sudo systemctl is-active --quiet service kubelet": exit status 1 (175.734826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestNetworkPlugins/group/false (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-019279 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-019279 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.916226ms)

                                                
                                                
-- stdout --
	* [false-019279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21997
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1202 20:44:59.632755  177429 out.go:360] Setting OutFile to fd 1 ...
	I1202 20:44:59.633019  177429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:44:59.633029  177429 out.go:374] Setting ErrFile to fd 2...
	I1202 20:44:59.633034  177429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1202 20:44:59.633240  177429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21997-143119/.minikube/bin
	I1202 20:44:59.633719  177429 out.go:368] Setting JSON to false
	I1202 20:44:59.634599  177429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8844,"bootTime":1764699456,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1202 20:44:59.634671  177429 start.go:143] virtualization: kvm guest
	I1202 20:44:59.636900  177429 out.go:179] * [false-019279] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1202 20:44:59.638408  177429 out.go:179]   - MINIKUBE_LOCATION=21997
	I1202 20:44:59.638404  177429 notify.go:221] Checking for updates...
	I1202 20:44:59.640792  177429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1202 20:44:59.642489  177429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21997-143119/kubeconfig
	I1202 20:44:59.643766  177429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21997-143119/.minikube
	I1202 20:44:59.645072  177429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1202 20:44:59.646463  177429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1202 20:44:59.648342  177429 config.go:182] Loaded profile config "cert-expiration-095611": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1202 20:44:59.648489  177429 config.go:182] Loaded profile config "kubernetes-upgrade-950537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1202 20:44:59.648638  177429 config.go:182] Loaded profile config "running-upgrade-651144": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1202 20:44:59.648780  177429 driver.go:422] Setting default libvirt URI to qemu:///system
	I1202 20:44:59.686052  177429 out.go:179] * Using the kvm2 driver based on user configuration
	I1202 20:44:59.687132  177429 start.go:309] selected driver: kvm2
	I1202 20:44:59.687155  177429 start.go:927] validating driver "kvm2" against <nil>
	I1202 20:44:59.687172  177429 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1202 20:44:59.689031  177429 out.go:203] 
	W1202 20:44:59.690291  177429 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1202 20:44:59.691571  177429 out.go:203] 

                                                
                                                
** /stderr **
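The exit status 14 is again the intended usage error: with --container-runtime=crio, minikube requires a CNI, so --cni=false is rejected before any VM is created. A sketch of a valid combination, assuming the bridge CNI would be acceptable for the scenario:

    minikube start -p false-019279 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio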
net_test.go:88: 
----------------------- debugLogs start: false-019279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.63:8443
  name: cert-expiration-095611
contexts:
- context:
    cluster: cert-expiration-095611
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095611
  name: cert-expiration-095611
current-context: ""
kind: Config
users:
- name: cert-expiration-095611
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.crt
    client-key: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.key
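
The empty current-context above explains the repeated "context was not found for specified context: false-019279" messages: the false-019279 profile was never created, and the only entry in this kubeconfig belongs to cert-expiration-095611. Purely as an illustration (not something this test does), selecting that existing context would be:

    kubectl config use-context cert-expiration-095611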

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-019279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-019279"

                                                
                                                
----------------------- debugLogs end: false-019279 [took: 3.616448074s] --------------------------------
helpers_test.go:175: Cleaning up "false-019279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-019279
--- PASS: TestNetworkPlugins/group/false (3.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (76.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3782559 start -p stopped-upgrade-225043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3782559 start -p stopped-upgrade-225043 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (45.433391841s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3782559 -p stopped-upgrade-225043 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3782559 -p stopped-upgrade-225043 stop: (1.738406614s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-225043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-225043 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (29.807658115s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (76.98s)
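TestStoppedBinaryUpgrade exercises the cold-upgrade path: a previously released binary creates and stops the cluster, then the freshly built binary restarts it. Reproducing the same shape by hand, as a sketch assuming an older release has been downloaded to /tmp/minikube-old:

    /tmp/minikube-old start -p stopped-upgrade-225043 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-old -p stopped-upgrade-225043 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-225043 --memory=3072 --driver=kvm2 --container-runtime=crio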

                                                
                                    
TestISOImage/Setup (29.58s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-856307 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-856307 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.584436085s)
--- PASS: TestISOImage/Setup (29.58s)

                                                
                                    
TestPause/serial/Start (64.01s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-892862 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-892862 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m4.006990225s)
--- PASS: TestPause/serial/Start (64.01s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-225043
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-225043: (1.031958321s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (80.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m20.914580282s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.91s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which curl"
E1202 20:54:49.012700  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which docker"
E1202 20:54:48.848509  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:48.854986  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:48.866717  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:48.888507  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:48.930426  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/docker (0.18s)

                                                
                                    
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
TestISOImage/Binaries/iptables (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.17s)

                                                
                                    
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which socat"
E1202 20:54:48.053376  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which VBoxControl"
E1202 20:54:47.761788  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (98.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1202 20:46:33.863025  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:47:12.168499  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m38.471147082s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (98.47s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-019279 "pgrep -a kubelet"
I1202 20:47:51.148288  147070 config.go:182] Loaded profile config "auto-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j7lls" [0f74cc6f-5d1c-4412-9d40-4639536b9b05] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j7lls" [0f74cc6f-5d1c-4412-9d40-4639536b9b05] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003996398s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)
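The NetCatPod step applies testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat reports Running. An equivalent wait expressed directly with kubectl, as a sketch against the same context:

    kubectl --context auto-019279 -n default wait --for=condition=ready pod -l app=netcat --timeout=15m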

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
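Localhost and HairPin both probe from inside the netcat pod: the first dials localhost:8080, the second dials the netcat service name, which only succeeds when hairpin (pod-to-its-own-service) traffic is permitted by the CNI. Since nc -z only tests whether the port accepts a connection, each subtest is purely an exit-code check; a failing probe surfaces as a non-zero exit from kubectl exec itself, for example:

    kubectl --context auto-019279 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080; echo exit=$?'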

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-df2mv" [ab018661-d020-43bf-b65a-e321ae388774] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004566939s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.774568933s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.77s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (90.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m30.766867051s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.77s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-019279 "pgrep -a kubelet"
I1202 20:48:18.127520  147070 config.go:182] Loaded profile config "kindnet-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wht4" [6aa2a4e6-739a-409e-890b-5b985ab41ccf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5wht4" [6aa2a4e6-739a-409e-890b-5b985ab41ccf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004666389s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (69.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m9.174363069s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.17s)
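
Each .../Start test in this group runs the same minikube start invocation and differs only in the networking flag (--enable-default-cni=true here, --cni=flannel and --cni=bridge further below). A hedged Go sketch of assembling that invocation follows; the flag values are copied from the log, while startCluster and the timeout handling are illustrative rather than the actual net_test.go code.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// startCluster assembles the minikube invocation used by the Start
// tests above: the common flags plus one networking flag per plugin.
func startCluster(profile, netFlag string) error {
	args := []string{
		"start", "-p", profile,
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		netFlag,
		"--driver=kvm2", "--container-runtime=crio",
	}
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()
	out, err := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("%s\n", out)
	return err
}

func main() {
	// Flag values taken from the Start log lines in this section.
	startCluster("enable-default-cni-019279", "--enable-default-cni=true")
	startCluster("flannel-019279", "--cni=flannel")
	startCluster("bridge-019279", "--cni=bridge")
}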

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5j9rt" [3d7157a0-8e9d-4633-b735-aea1a81e0b6f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004459915s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-019279 "pgrep -a kubelet"
I1202 20:49:43.991612  147070 config.go:182] Loaded profile config "calico-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tgtpf" [a4b0fa89-c22e-4cb6-a6b1-c3d7afe771a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tgtpf" [a4b0fa89-c22e-4cb6-a6b1-c3d7afe771a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005346017s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-019279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
I1202 20:49:48.553811  147070 config.go:182] Loaded profile config "custom-flannel-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.707499523s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ltkh7" [c2c044ea-03b5-4973-8344-e3237cd483a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ltkh7" [c2c044ea-03b5-4973-8344-e3237cd483a0] Running
I1202 20:49:53.971236  147070 config.go:182] Loaded profile config "enable-default-cni-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004200751s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-019279 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m7dn2" [4f1368d7-7b8d-4ee4-a38a-7b308037e49b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m7dn2" [4f1368d7-7b8d-4ee4-a38a-7b308037e49b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004019628s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-019279 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.166955483s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (71.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-695400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-695400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m11.526354848s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (102.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-141333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-141333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m42.316696623s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-sbld2" [f3fa6e68-ec30-4443-85d5-8ac1af065708] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004633111s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-019279 "pgrep -a kubelet"
I1202 20:51:08.383859  147070 config.go:182] Loaded profile config "flannel-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-clb4g" [d955af33-9134-413b-bdec-9fc1d514cc43] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-clb4g" [d955af33-9134-413b-bdec-9fc1d514cc43] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006015886s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-019279 "pgrep -a kubelet"
I1202 20:51:11.149970  147070 config.go:182] Loaded profile config "bridge-019279": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-019279 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lvfws" [980c153a-efbb-48f5-b66b-cb5d1abd4844] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lvfws" [980c153a-efbb-48f5-b66b-cb5d1abd4844] Running
E1202 20:51:16.934074  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00471667s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-019279 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-019279 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-695400 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [69c5ca18-9962-4bc5-8ba0-a45600a9a367] Pending
helpers_test.go:352: "busybox" [69c5ca18-9962-4bc5-8ba0-a45600a9a367] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1202 20:51:33.863764  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-945181/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [69c5ca18-9962-4bc5-8ba0-a45600a9a367] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0051379s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-695400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)
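
The DeployApp step applies testdata/busybox.yaml, waits for the integration-test=busybox pod to reach Running, and then reads ulimit -n inside it. The following is a small polling sketch of that wait-then-exec pattern, assuming kubectl is on PATH; the jsonpath query, poll interval and helper names are illustrative, not the helpers_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls until every pod matching the label reports phase
// Running, the same condition the DeployApp step above waits for.
func waitRunning(kctx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx, "get", "pods",
			"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 && allRunning(phases) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %v", label, timeout)
}

func allRunning(phases []string) bool {
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	kctx := "old-k8s-version-695400" // context name from the log above
	if err := waitRunning(kctx, "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	// Same follow-up check as the test: open-file limit inside the pod.
	out, _ := exec.Command("kubectl", "--context", kctx, "exec", "busybox",
		"--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	fmt.Printf("ulimit -n: %s", out)
}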

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (57.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-213227 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-213227 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (57.755083067s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-875061 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-875061 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m13.553845971s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-695400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-695400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049825646s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-695400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (85.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-695400 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-695400 --alsologtostderr -v=3: (1m25.083392657s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (85.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-141333 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7448abb1-37dd-4af8-8e69-686e6f028828] Pending
helpers_test.go:352: "busybox" [7448abb1-37dd-4af8-8e69-686e6f028828] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7448abb1-37dd-4af8-8e69-686e6f028828] Running
E1202 20:52:12.168563  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/addons-375150/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004764194s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-141333 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-141333 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-141333 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (78.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-141333 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-141333 --alsologtostderr -v=3: (1m18.655427542s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (78.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-213227 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [40712072-0e21-4ca2-ae37-7839494037b3] Pending
helpers_test.go:352: "busybox" [40712072-0e21-4ca2-ae37-7839494037b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [40712072-0e21-4ca2-ae37-7839494037b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004951372s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-213227 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-213227 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-213227 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (87.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-213227 --alsologtostderr -v=3
E1202 20:52:51.391389  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.397868  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.409299  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.430752  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.472388  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.553941  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:51.715613  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-213227 --alsologtostderr -v=3: (1m27.438266136s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-875061 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [00dc9a61-4664-4e99-ab45-126038455386] Pending
E1202 20:52:52.037360  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:52.679762  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [00dc9a61-4664-4e99-ab45-126038455386] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1202 20:52:53.961050  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:52:56.523652  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [00dc9a61-4664-4e99-ab45-126038455386] Running
E1202 20:53:01.645517  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004178449s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-875061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-875061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-875061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (82.80s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-875061 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-875061 --alsologtostderr -v=3: (1m22.800377223s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (82.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695400 -n old-k8s-version-695400
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695400 -n old-k8s-version-695400: exit status 7 (62.25072ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-695400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
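
EnableAddonAfterStop queries minikube status against a stopped cluster; the command exits non-zero (exit status 7 above, noted as "may be ok") while still printing the host state. A sketch of reading both the printed state and the exit code follows; treating a non-zero exit as tolerable simply mirrors what the test logs show, not documented minikube exit-code semantics.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs `minikube status --format={{.Host}}` and returns the
// printed state together with the exit code, since the command exits
// non-zero for a stopped host (exit status 7 in the log above).
func hostState(profile string) (string, int, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return state, ee.ExitCode(), nil // non-zero exit, but the state is still usable
	}
	return state, 0, err
}

func main() {
	state, code, err := hostState("old-k8s-version-695400")
	fmt.Printf("host=%q exit=%d err=%v\n", state, code, err)
	// The test proceeds as long as the state is readable, e.g. "Stopped".
	if state == "Stopped" {
		fmt.Println("cluster is stopped; addons can still be enabled before the next start")
	}
}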

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (41.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-695400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1202 20:53:11.887798  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:11.926339  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:11.932799  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:11.944338  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:11.966027  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:12.007548  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:12.089086  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:12.251151  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:12.572895  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:13.215002  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:14.497348  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:17.059194  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:22.181107  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:32.369536  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:53:32.423056  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-695400 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (41.326839364s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695400 -n old-k8s-version-695400
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (41.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-141333 -n no-preload-141333
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-141333 -n no-preload-141333: exit status 7 (66.823343ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-141333 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (53.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-141333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-141333 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (53.299898636s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-141333 -n no-preload-141333
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.82s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kjjkr" [757f454d-6ff8-4d95-839a-60d3260e7d38] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1202 20:53:52.904804  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kjjkr" [757f454d-6ff8-4d95-839a-60d3260e7d38] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004599098s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kjjkr" [757f454d-6ff8-4d95-839a-60d3260e7d38] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004523768s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-695400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-695400 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.19s)
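
VerifyKubernetesImages lists the images in the profile as JSON and flags any non-minikube images it finds (busybox and kindnetd above). A loose sketch of consuming that output follows; the "repoTags" key is an assumption about the JSON shape, which the log itself does not show.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command the VerifyKubernetesImages step runs above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-695400",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Decode without committing to a schema; "repoTags" is an assumed
	// key name here, not something stated in the log above.
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output format:", err)
		return
	}
	for _, img := range images {
		fmt.Println(img["repoTags"])
	}
}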

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-695400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695400 -n old-k8s-version-695400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695400 -n old-k8s-version-695400: exit status 2 (254.176311ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695400 -n old-k8s-version-695400
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695400 -n old-k8s-version-695400: exit status 2 (219.838875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-695400 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695400 -n old-k8s-version-695400
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695400 -n old-k8s-version-695400
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.68s)
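
The Pause step pauses the cluster, confirms via status that the API server reports Paused while the kubelet reports Stopped (each with exit status 2, treated as "may be ok"), then unpauses and re-checks. A compressed sketch of that sequence follows; the APIServer and Kubelet field names come from the --format templates in the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const minikube = "out/minikube-linux-amd64"

// status queries one field of `minikube status`; the exit code is
// ignored here because paused components report non-zero (exit
// status 2 in the log above) while still printing their state.
func status(profile, field string) string {
	out, _ := exec.Command(minikube, "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	p := "old-k8s-version-695400"
	exec.Command(minikube, "pause", "-p", p, "--alsologtostderr", "-v=1").Run()
	fmt.Printf("after pause:   APIServer=%s Kubelet=%s\n", status(p, "APIServer"), status(p, "Kubelet"))
	exec.Command(minikube, "unpause", "-p", p, "--alsologtostderr", "-v=1").Run()
	fmt.Printf("after unpause: APIServer=%s Kubelet=%s\n", status(p, "APIServer"), status(p, "Kubelet"))
}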

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (57.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-115071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-115071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (57.81846772s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (57.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213227 -n embed-certs-213227
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213227 -n embed-certs-213227: exit status 7 (64.237775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-213227 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (66.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-213227 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1202 20:54:13.331231  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-213227 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m5.892886898s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-213227 -n embed-certs-213227
E1202 20:55:18.777590  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (66.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061: exit status 7 (68.868177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-875061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-875061 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-875061 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m9.030330065s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-dbnnx" [24c02a57-52ad-47eb-b3da-1c6ecd10b558] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1202 20:54:30.830347  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/functional-007973/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-dbnnx" [24c02a57-52ad-47eb-b3da-1c6ecd10b558] Running
E1202 20:54:33.866666  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/kindnet-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.00431558s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)
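The readiness wait above is driven by the test helpers; a roughly equivalent manual check with plain kubectl (a sketch, not what the harness itself executes) would be:

  $ kubectl --context no-preload-141333 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m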

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-dbnnx" [24c02a57-52ad-47eb-b3da-1c6ecd10b558] Running
E1202 20:54:37.799302  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:37.805810  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:37.817208  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:37.838715  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:37.880220  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:37.961779  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:38.123390  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:38.445242  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:39.087327  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:40.369183  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005812623s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-141333 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-141333 image list --format=json
E1202 20:54:42.931147  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
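To look at the same image list by hand, the logged command can be run directly; piping through grep is only an illustrative way to spot the leftover busybox image noted above, not something the test does internally:

  $ out/minikube-linux-amd64 -p no-preload-141333 image list --format=json | grep busybox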

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-141333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-141333 -n no-preload-141333
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-141333 -n no-preload-141333: exit status 2 (234.153914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-141333 -n no-preload-141333
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-141333 -n no-preload-141333: exit status 2 (235.544959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-141333 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-141333 -n no-preload-141333
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-141333 -n no-preload-141333
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.95s)
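The Pause subtests in this run all follow the same sequence, which can be replayed by hand against this profile; as the log shows, `status` exits with status 2 while components are paused, so the `|| true` guards are only an addition to keep an interactive shell script from stopping:

  $ out/minikube-linux-amd64 pause -p no-preload-141333 --alsologtostderr -v=1
  $ out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-141333 -n no-preload-141333 || true   # "Paused", exit status 2
  $ out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-141333 -n no-preload-141333 || true     # "Stopped", exit status 2
  $ out/minikube-linux-amd64 unpause -p no-preload-141333 --alsologtostderr -v=1
  $ out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-141333 -n no-preload-141333             # exits 0 again once unpaused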

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /data | grep /data"
E1202 20:54:49.174765  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
E1202 20:54:50.138506  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
E1202 20:54:49.496981  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
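Each PersistentMounts subtest above asserts that the given directory is mounted from an ext4 filesystem (the persistent disk); the check is the same one-liner per path and can be run manually, for example for /data:

  $ out/minikube-linux-amd64 -p guest-856307 ssh "df -t ext4 /data | grep /data"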

                                                
                                    
x
+
TestISOImage/VersionJSON (0.21s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.21s)
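The same version metadata can be read straight from the guest; the jq filter below is only an illustration and assumes the JSON keys match the field names printed above:

  $ out/minikube-linux-amd64 -p guest-856307 ssh "cat /version.json"
  $ out/minikube-linux-amd64 -p guest-856307 ssh "cat /version.json" | jq -r '.iso_version, .commit'   # key names assumed from the fields logged above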

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.19s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-856307 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
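The eBPF check above only probes for the kernel's BTF type information; the same test can be run by hand with the logged command:

  $ out/minikube-linux-amd64 -p guest-856307 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"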
E1202 20:54:53.981941  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.197478  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.203976  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.215400  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.237240  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.278800  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.360316  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.521899  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:54.843895  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:55.485395  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:56.767543  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:58.295807  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/calico-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:59.103939  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:54:59.329678  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:55:04.452041  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-115071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-115071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.293601783s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-115071 --alsologtostderr -v=3
E1202 20:55:09.345371  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:55:14.694248  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-115071 --alsologtostderr -v=3: (8.329080448s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-115071 -n newest-cni-115071
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-115071 -n newest-cni-115071: exit status 7 (85.09755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-115071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (45.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-115071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-115071 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.16847053s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-115071 -n newest-cni-115071
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-88bqm" [e2ea3d67-f1d5-4054-bcf5-29f35e62ba2e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-88bqm" [e2ea3d67-f1d5-4054-bcf5-29f35e62ba2e] Running
E1202 20:55:29.827720  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/custom-flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.005618264s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-88bqm" [e2ea3d67-f1d5-4054-bcf5-29f35e62ba2e] Running
E1202 20:55:35.175583  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/enable-default-cni-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1202 20:55:35.253136  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/auto-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004388945s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-213227 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c4qfl" [2f9077ce-4a58-493a-8eca-4d5ba5b42f8e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c4qfl" [2f9077ce-4a58-493a-8eca-4d5ba5b42f8e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003167654s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-213227 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-213227 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213227 -n embed-certs-213227
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213227 -n embed-certs-213227: exit status 2 (244.874783ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213227 -n embed-certs-213227
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213227 -n embed-certs-213227: exit status 2 (248.108177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-213227 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-213227 -n embed-certs-213227
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-213227 -n embed-certs-213227
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c4qfl" [2f9077ce-4a58-493a-8eca-4d5ba5b42f8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006128482s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-875061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-875061 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-875061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061: exit status 2 (231.95238ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061: exit status 2 (229.777087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-875061 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-875061 -n default-k8s-diff-port-875061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.59s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-115071 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-115071 --alsologtostderr -v=1
E1202 20:56:03.482962  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-115071 -n newest-cni-115071
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-115071 -n newest-cni-115071: exit status 2 (224.579897ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-115071 -n newest-cni-115071
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-115071 -n newest-cni-115071: exit status 2 (218.940062ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-115071 --alsologtostderr -v=1
E1202 20:56:04.764858  147070 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/flannel-019279/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-115071 -n newest-cni-115071
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-115071 -n newest-cni-115071
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.30s)

                                                
                                    

Test skip (51/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0.67
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
368 TestNetworkPlugins/group/kubenet 3.93
376 TestNetworkPlugins/group/cilium 4.21
385 TestStartStop/group/disable-driver-mounts 0.19
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1202 19:44:57.497470  147070 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
W1202 19:44:58.151045  147070 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
W1202 19:44:58.167685  147070 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 status code: 404
aaa_download_only_test.go:113: No preload image
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.67s)
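The skip above happens because neither preload location currently serves a tarball for v1.35.0-beta.0; the same check can be reproduced with HEAD requests against the URLs from the log (a sketch; expect 404 status lines, as logged):

  $ curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n1
  $ curl -sI https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 | head -n1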

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-375150 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-019279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.63:8443
  name: cert-expiration-095611
contexts:
- context:
    cluster: cert-expiration-095611
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095611
  name: cert-expiration-095611
current-context: ""
kind: Config
users:
- name: cert-expiration-095611
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.crt
    client-key: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-019279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-019279"

                                                
                                                
----------------------- debugLogs end: kubenet-019279 [took: 3.745856766s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-019279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-019279
--- SKIP: TestNetworkPlugins/group/kubenet (3.93s)
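The kubenet group is skipped before any cluster is created because kubenet provides no CNI plugin, while CRI-O (the runtime under test here) requires one; the debugLogs dump above therefore only shows "context not found" and "profile not found" errors from a cluster that never existed. A minimal sketch of that kind of up-front guard, with illustrative names rather than the actual net_test.go implementation:

package main

import (
	"strings"
	"testing"
)

// skipIfCNIRequired skips kubenet runs when the container runtime needs an
// external CNI plugin (CRI-O and containerd both do), matching the skip
// message recorded in the log above.
func skipIfCNIRequired(t *testing.T, networkPlugin, containerRuntime string) {
	t.Helper()
	needsCNI := containerRuntime == "crio" || containerRuntime == "containerd"
	if strings.EqualFold(networkPlugin, "kubenet") && needsCNI {
		t.Skipf("Skipping the test as %s container runtime requires CNI", containerRuntime)
	}
}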

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-019279 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-019279" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21997-143119/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.63:8443
  name: cert-expiration-095611
contexts:
- context:
    cluster: cert-expiration-095611
    extensions:
    - extension:
        last-update: Tue, 02 Dec 2025 20:42:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-095611
  name: cert-expiration-095611
current-context: ""
kind: Config
users:
- name: cert-expiration-095611
  user:
    client-certificate: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.crt
    client-key: /home/jenkins/minikube-integration/21997-143119/.minikube/profiles/cert-expiration-095611/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-019279

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-019279" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-019279"

                                                
                                                
----------------------- debugLogs end: cilium-019279 [took: 4.032584283s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-019279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-019279
--- SKIP: TestNetworkPlugins/group/cilium (4.21s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-797755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-797755
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    